The Infosys Utilities Blog seeks to discuss and answer the industry’s burning Smart Grid questions through the commentary of the industry’s leading Smart Grid and Sustainability experts. This blogging community offers a rich source of fresh new ideas on the planning, design and implementation of solutions for the utility industry of tomorrow.

September 28, 2018


Change Challenge

With the many disrupters affecting all aspects of our life, change is a constant (see my previous blogs). However, at the heart of any change are people, and change can be frightening for many, leading to uncertainty and inefficiency. All too often good ideas have been lost through poor management of change.

Effective management of any change is therefore vital. Understanding the current issues facing a business seems a very basic task for all implementing change, but problems can be hidden and the underlying root causes obscured by people protecting their current ways of working. This can be for many reasons, some related to maintaining their perceived status, others more driven by a fear of the new: the 'known' is far more comfortable.

So how are these challenges managed? There are many tools and techniques used across industries, and many can help in understanding the issues and concerns at a global level. However, the best technique is listening. Taking time to talk, and most importantly listen, to those doing the work can deliver far more insight than many workshops. A couple of hours sat beside someone using the current tools, seeing the issues they have and the frustrations they suffer, is invaluable in shaping the new solution.

We, in IT, live in a world where 'new' is exciting and change is invigorating. To many actually doing the work, however, it can be daunting and disruptive to their daily tasks. It is therefore vital that we fully understand the work our customer is doing, and ensure that our new tool is capable of meeting the true needs of the end users. My Grandmother used to say we have two ears and one mouth, and should use them in that proportion: not a bad maxim on which to base our delivery of change.

August 29, 2018

Keeping Everyone 'Appy

Mobile applications (Apps) are now a major feature of most utilities, as an effective mobile workforce is critical to efficient operation. However, far too often the Apps are not intuitive, are disjointed, and are focussed on serving the backend rather than those in the field. Unsurprisingly, such Apps have delivered far less value to the business than originally envisaged.

So what will keep everyone happy? Firstly, the Apps should be based around the actual job function. There are many deployments where 'standard' Apps, based on core network applications, have been used; however, that often leads to field operatives having to use several Apps to complete a task. Workflows on the Apps can also be generic, meaning the operatives have to jump between screens to complete tasks. Unsurprisingly this leads to inefficiencies, not only in field operative time, but also in inconsistencies of data between the various Apps.

Secondly, good Apps (i.e. the ones we all use on a daily basis) tend to be intuitive, requiring little training. This is generally because those who developed the App have a good understanding of the end user's needs. However, in utilities it is unfortunately all too common for those writing the Apps to have very little knowledge of the needs of the field staff.

Thirdly, many utilities operate in areas where there is limited connectivity, and yet too many Apps rely on connectivity to deliver their service. It is vital that the user can obtain or enter the information at the location of the work or asset. If field operatives are unable to find or enter information at that location, there can be health and safety risks, as well as inefficiencies.
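
A minimal offline-first pattern is to queue every entry locally and sync opportunistically when connectivity returns. The sketch below is a simplified illustration only: the `OfflineQueue` class and the `upload` callable are hypothetical stand-ins for whatever local store and backend API a given utility actually uses.

```python
import json
import sqlite3

class OfflineQueue:
    """Store field readings locally; push them to the backend when a connection exists."""

    def __init__(self, path="field_queue.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def record(self, reading):
        # Always write locally first -- no connectivity required at the asset.
        self.db.execute(
            "INSERT INTO pending (payload) VALUES (?)", (json.dumps(reading),)
        )
        self.db.commit()

    def sync(self, upload):
        """Push queued readings via the supplied upload callable; returns the count synced.

        `upload` should return True on success so failed rows stay queued for retry.
        """
        synced = 0
        for row_id, payload in self.db.execute(
            "SELECT id, payload FROM pending"
        ).fetchall():
            if upload(json.loads(payload)):
                self.db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
                synced += 1
        self.db.commit()
        return synced
```

Because failed uploads remain in the queue, the operative can keep working at a site with no signal and the App reconciles later, rather than blocking data entry on a live connection.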

Keeping everyone happy with their Apps is not difficult as long as the basics of being task-based, intuitive and working offline are followed. If they can also help the field operative identify issues with assets before failure, the benefits are even greater; however, that is another story.

April 25, 2018

The Only Constant is Change - the Water Cycle

There has been a move back towards catchment-based working for a number of years, and this has brought many advantages, especially in regard to environmental improvements. Generally, however, such working tends to be sector- and company-based. Although there have been a few cross-sector studies and solutions, these are the exception rather than the rule.

There are a number of disruptive factors, such as the European Water Framework Directive, that are increasingly moving organisations towards multi-sector, multi-company integrated catchment solutions. There are already many studies pin-pointing pollution, both point source and diffuse, and moving solutions towards beneficial outcomes and away from 'tick box' outputs. There are similar studies looking at drought risk. However, there are very few examples where such studies are joined, let alone linked to other water-related impacts, such as flooding and agricultural production.

As new tools become increasingly available and affordable - especially the ability to collate and use large and disparate data sources, and the rise of AI - such whole-water-cycle catchment working will increase, and provide real benefit across sectors. Enabling this, however, will require not only new technology but, more importantly, changes in working practice. For example, sharing of data between organisations will be critical. Individuals will need to understand more about the issues and potential solutions for others affected by the water cycle in an area. Whilst the technical challenges are complex, the organisational and people aspects present even bigger challenges. We must however overcome such issues if we are to deliver truly holistic and sustainable solutions.

April 17, 2018

The Only Constant is Change - Electricity 2.0

Electric networks are facing more variable loads at the local level (down to LV), including new demands, such as electric vehicles and heat pumps; embedded generation, such as photovoltaic, micro-hydro and wind; and more variability in population density. These localised demand peaks put stress on the system, risking phase imbalance, voltage, frequency and waveform issues, increased outages (customer interruptions, network interruptions), and thermal issues.
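
As a toy illustration of the kind of monitoring this implies, the sketch below computes a simple percent imbalance across three phase currents. The function name and the formula (maximum deviation from the mean, over the mean, in the style of the NEMA unbalance definition) are my own illustrative choices, not a specific operator's method.

```python
def phase_imbalance(i_a, i_b, i_c):
    """Percent current imbalance across three phases.

    Computed as the maximum deviation from the mean phase current, divided by
    the mean (a NEMA-style definition, simplified for illustration).
    """
    mean = (i_a + i_b + i_c) / 3
    if mean == 0:
        return 0.0
    return 100 * max(abs(i - mean) for i in (i_a, i_b, i_c)) / mean
```

A balanced feeder scores 0%; a cluster of EV chargers connected to a single phase pushes the figure up, which is exactly the signal an operator would want flagged before voltage and thermal problems follow.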

Traditional management of the network to mitigate those risks would lead to many issues. These include wholesale network capacity upgrades (i.e. laying larger cables and installing larger transformers), major disruption, including to traffic and customers (planned outages), and significant increases in charges. These impacts would be unacceptable to customers and other stakeholders, including those whose journeys are interrupted by street works.

In the future, Distribution Network Operators will need to become Distribution System Operators (DSOs). They will use LV automation and switching to balance loads and demands. This will mean a move towards Active (or Adaptive) Network Management, to minimise and optimise the need for network upgrades. As such, they will manage local networks like large national Transmission networks.

To become a Distribution System Operator, a network operator will need a solid base. This includes a sound connectivity model, the ability to link/share connectivity details with modelling tools, and secure links between core asset systems (e.g. GIS/aDMS). A few organisations are already moving in this direction, and I am currently involved in a DSO project. Such changes will become 'the norm' over the next few years.

January 28, 2018

Managing Smart Electric Meters- Things to Consider

The utility industry has been witnessing an immense rise in smart electric meter implementation across the globe. With the digital revolution setting in, there has been an increasing move towards enabling advanced metering infrastructure (AMI) for effectively managing meter data and operations. The ability to enhance grid reliability, to manage peak loads effectively, and to pass control of usage back to the end customers have all catalysed this trend. The envisioned benefits of smart meters to the industry are many, but for me as an asset management consultant it gets me thinking: what's in store for me?


September 26, 2017

The Only Constant is Change

Everyone lives in changing times, and the pace of change is accelerating. In Utilities, however, caution is rightly placed on any change, as our societies, and to a large degree civilisation, are supported by sound infrastructure. Nonetheless, the way we use our infrastructure will have to change radically over the next few years. Increasing population and population densities, climate change and aging infrastructure are leading to more system failures, in terms of outages, flooding and limitations on use. It is becoming more difficult to model the impacts of this change on our infrastructure, as many of the historic 'norms' no longer apply. Our universities have many research projects trying to better understand, and hence predict, how infrastructure will be affected by change, and the best options to adopt to ensure infrastructure can meet these challenges. Undoubtedly some of the new tools being developed, especially AI coupled with effective IT/OT integration, will greatly assist in this area. I am helping to organise a Future Water Association conference on 4/5 December this year that will look at how we move towards 'smart water networks'.

Over the next few years, however, the area that will probably see the greatest change is electricity distribution. The way we both generate and use electricity is changing at an exponential rate. Embedded generation, such as wind and solar, means that supply enters the overall grid at many diverse locations, and intermittency means that the quantity of that supply will vary greatly over days and years. New demands, such as electric vehicles and heat pumps, mean that the peaks and troughs of power required will become more intense. To manage this in 'traditional' ways would mean major upgrades to the networks, which we cannot afford, either in monetary or disruption terms. Organisations are thus moving towards 'Distribution System Operation', where local networks, including LV, will be actively managed, in a similar but more local way to how transmission networks are managed regionally and nationally.

This is the first of a series of blogs where I will start to explore what change might mean to utilities, starting with 'Distribution System Operation'.

July 20, 2017

Utility Procurement - a New Vision

Innovation is part of the 'DNA' of Infosys, and we are always being asked to innovate by our clients. All too often, however, the procurement process constrains our ability to offer that innovation. The deliverables are given strict bounds, and we are only able to offer specific solutions. For example, the need may be for improvements in asset management, but the tender is constrained to configuring and installing a particular software package. Whilst in a few cases that may be due to a poor procurement strategy, in most cases it is due to the constraints, both regulatory and corporate, that control how procurement can be undertaken.

Does it have to be this way? I believe that clients could procure in an innovative way that allows their suppliers to show their ability to offer novel ways to solve problems. The process could be two-stage: the first a simple pre-qualification exercise to determine a shortlist (as is currently undertaken); the second to deliver an outline design of the solution, where the client pays a small fee to the tenderers to go into far more detail than current tenders allow. This would enable the supplier to demonstrate their ability to deliver innovation, and the client both to understand that ability and to see how the supplier performs in a work situation. Such a process would enable the client to tackle much larger issues than are generally covered in a tender, and indeed a few utility clients are already using a more agile approach. I will demonstrate with an example in asset management.

This example tender could be phrased "Devise a solution that will deliver an x% reduction in asset management costs, whilst producing a y% improvement in performance, without increasing overheads." In the pre-qualification, tenderers would need to demonstrate experience in such areas (although not necessarily in the same industry), and provide good and pertinent references: this would allow the client to shortlist. Tenderers could also consider partners to add to their bid, for example instrumentation suppliers and installers. In the tender, the client would allow a certain sum for each tenderer to produce their innovative solution, with sufficient access to client staff to determine constraints, both technological and business. This phase would of course need to be undertaken under non-disclosure agreements to protect all parties. Once the 'tender' is completed, the client would be able to select a supplier with a much greater understanding of that supplier's ability to innovate in a way that will benefit their business.

Whilst this system may seem strange to some in utility procurement, it is similar to processes employed in areas like architecture, which have allowed buildings such as the Sydney Opera House to be developed. Do we want our future to be full of bland boxes, or Guggenheims?

March 14, 2017

The Security trap

Security in IT is very important. Unauthorised access to confidential information can cause major disruption to companies, and to individuals' lives. Some disruption can have life-changing impacts on finance and reputation. Even 'lesser' security issues, such as viruses, can cause massive damage to company systems. Breaches of Operational Technology (OT) systems (such as SCADA) in utilities could cause countrywide failures, and put lives at risk. IT security is therefore quite rightly taken very seriously by governments, organisations and individuals.

However IT security is just one amongst the many risks we all face on a daily basis. Even a major breach of a utility OT system would not have the impact of an atomic bomb, and yet the world managed to increase overall wealth, and made great strides to reduce poverty, throughout the Cold War, under the threat of mutually assured destruction. IT security is therefore just another risk that we all have to manage.

Unfortunately, in too many organisations IT security is used as a reason not to implement technological improvements. For example, video conferencing between computers, and even mobile devices, is something many of us use regularly; however, video conferencing between organisations is very rare, generally because of 'IT security' concerns. Sharing of information is frequently blocked, and yet shared information often increases knowledge and opportunity for all of the participating organisations. For example, Transport for London (TfL) made most of the information for its transport systems (e.g. timetables) publicly available: there is now a plethora of 'apps' to help travellers plan their journeys, all of which have been produced at no expense to TfL, and increase customer satisfaction.

I believe it is a duty of those of us in the IT world to ensure that IT security is managed appropriately, and not used as an excuse to block the business and personal benefits that our innovative technology can bring. Like any other risk it should be managed appropriately and balanced against the benefits. We cannot let the few who would wish to take advantage of us through IT security breaches constrain our future.

March 3, 2017

The Asset Management Journey - into Adaptive

For utilities, most asset management was traditionally based on cycles of planned maintenance, interrupted by many occurrences of reactive work. The planned maintenance was generally based on historic norms, often with little feedback on benefit. With the advent of asset management systems, both IT (e.g. EAM/WAM) and process (e.g. PAS 55, now ISO 55000), work became more planned and more benefit-based, drawing particularly on asset risk and criticality. Such changes made major improvements in efficiency, with reductions in reactive work from 70% to 30% not uncommon. However, planned work was, and in many cases still is, based on expectations of asset lifecycle performance, and not on actual asset feedback. Whilst such proactive strategies reduced service impacts, they led to higher levels of planned maintenance than necessary to ensure optimum asset life.

Over the last 20 years, industries have increasingly turned to predictive methodologies, using sensors and instrumentation, coupled with appropriate analytic software, to predict and prevent asset failure through understanding trends. For example, a large transmission operator measures transformer load against ambient and internal temperature. A band of 'normal' internal temperature against load and ambient temperature is mapped, and the system flags when internal temperature falls outside this range, so that checks can be made before any failure. Increasingly such tools use machine learning, which further helps to predict 'normal' asset behaviour. Asset management has therefore moved from Reactive through Proactive to Predictive.
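
A simplified sketch of that banding approach is below. The linear expected-temperature model and its coefficients are illustrative assumptions of mine, not the operator's fitted mapping, which would be derived from historic sensor data.

```python
def flag_transformer(load_pct, ambient_c, internal_c,
                     rise_per_load=0.6, rise_per_ambient=1.0, band_c=8.0):
    """Return True if internal temperature falls outside the 'normal' band.

    Expected temperature is modelled here as a simple linear function of load
    and ambient temperature; the coefficients and band width are illustrative
    placeholders, not a real operator's calibration.
    """
    expected = rise_per_ambient * ambient_c + rise_per_load * load_pct
    return abs(internal_c - expected) > band_c
```

In practice the band would be learned per asset from history (and, as the post notes, increasingly by machine learning), but the decision logic stays the same: flag the excursion early enough that a field check happens before the failure.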

Artificial Intelligence (AI) tools, such as Infosys NIA, are now starting to be used in asset management. These new methodologies use the AI engine to collate, compare, analyse, and highlight risks and opportunities. The tools can use structured and unstructured data, static and real-time, and can take data from disparate sources. The systems will increasingly refine their understanding of asset behaviour based on multiple inputs, such as sensors/instrumentation, third-party data (weather), social media feeds, and impacts flagged by external but publicly available sources. The tools will then be able to advise courses of action based on current events. They could also be used to model possible scenarios, and advise actions and impacts based on their understanding of inputs against outputs (stochastic modelling and beyond). Such tools will enable an organisation to continuously adapt its asset management strategies and implementation to current and future events.

I call this Adaptive Asset Management.

October 14, 2015

10 key pointers for an effective Web-GIS implementation leveraging ArcGIS Server

The following pointers came out of experience on a couple of large Web GIS implementations in the Utilities domain, using ArcGIS for Server version 10.2.1.

1. Never try to replicate your Desktop GIS on the Web
We have been using GIS as a desktop application for ages, so it is a natural tendency to adopt a similar view on the web as well. Long lists of layers in the Table of Contents, a plethora of tools that are seldom used, a North arrow and a measurement scale are a few things that remind us of a Desktop GIS. Build the application for a targeted audience - give users no more features than they absolutely need. Restrict them within a (work)flow so that they can navigate your app with ease. Always remember that your web GIS users are not GIS experts.

2. Map services are the key to success!
Pay special attention while creating your map services. ESRI has made it very easy to serve your spatial data. However, serving it optimally can be very tricky - particularly if you're targeting hundreds of concurrent users. Follow some basic rules of thumb: create multiple map services instead of one; put no more than 8 to 12 layers in a single map service; keep symbols as simple as possible; try not to use Definition Queries; follow the n+1 rule when setting the 'maximum number of instances per machine', n being the number of cores; and allow Windows to manage the page file automatically (in the case of virtual memory).
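
Two of those rules of thumb are easy to encode as small planning helpers. The sketch below is an illustrative calculation only - these are not ArcGIS API calls, just arithmetic you would apply when sizing services by hand.

```python
import os

def recommended_max_instances(cores=None):
    """The n+1 rule of thumb: one more service instance than machine cores."""
    cores = cores or os.cpu_count() or 1
    return cores + 1

def split_layers(layers, per_service=10):
    """Split a long layer list into multiple map services of 8-12 layers each.

    Ten layers per service is used here as a middle value in the 8-12 range.
    """
    return [layers[i:i + per_service] for i in range(0, len(layers), per_service)]
```

So an 8-core server would be configured with a maximum of 9 instances per machine, and a 25-layer map would be published as three services rather than one.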

3. Use the free base maps and other services from Bing, Google or ESRI
No matter how cleverly you prepare your own base maps, I can assure you they will not be better than the base maps that are available for free. Instead of concentrating on a killer base map to use as a backdrop for your GIS data, use one that is free - as a bonus, you will save yourself the trouble of updating it as well.

4. Choose your frontend technology carefully
Not many options are currently available for delivering a frontend API. For a wider audience, use JavaScript and HTML5 - unless you're developing features that are not yet mature in that environment.

5. Keep mobile devices in mind during design
More and more people are online on mobile devices than through their PCs. Though the majority of these users are mainly on social networking sites, they do view maps on their mobile devices. Think of the different screen sizes your users will be using to browse your app, and plan to accommodate 'tap's along with 'click's.

6. Initial load time should never exceed 8 seconds
The average adult's attention span for a page-load event is around 8 seconds. Today's users, with the availability of information at their fingertips (taps?), are increasingly impatient with wait times. If a page takes more than 8 seconds to open, the majority of users will 'X' it out. If you want a wider footprint for your web application, restrict the initial load time to 8 seconds - the quicker the better.
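
A quick way to keep yourself honest about that budget is to sample fetch times and check them against it. The helper names below are my own; timing the initial HTML fetch ignores scripts, tiles and rendering, so treat the result as a lower bound on what the user actually experiences.

```python
import time
import urllib.request

LOAD_BUDGET_S = 8.0  # the attention-span budget discussed above

def initial_load_time(url, timeout=LOAD_BUDGET_S):
    """Seconds to fetch the page's initial HTML (a rough lower bound on load time)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

def within_budget(timings_s, budget_s=LOAD_BUDGET_S, pct=0.95):
    """True if at least `pct` of the sampled load times meet the budget."""
    return sum(1 for t in timings_s if t <= budget_s) / len(timings_s) >= pct
```

Running `initial_load_time` against your app from a few representative networks, and feeding the samples to `within_budget`, gives a crude but repeatable regression check before each release.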

7. Display non-spatial data spatially
Integration is commonplace in today's GIS, and display of non-GIS data within GIS is the norm rather than the exception. There are various ways you can integrate - try displaying data on the map as graphic text rather than in a table within the map. Spatial distribution helps us see patterns that a tabular display fails to provide.

8. Pay more attention to User Experience over User Interface
User Experience (UX) is mostly (but not completely) achieved through the User Interface (UI). For example, when you provide a zoom-in feature in a mapping application, you can implement it as a command (fixed zoom-in) or a tool (the user draws a polygon on the map to zoom into). This is UI. However, implementing a zoom-in feature as a tool can give a different UX depending on how you have programmed the cursor for the 'after zoom-in' event - retain it in zoom-in mode, or take it back to the default mode (which is usually pan) when finished. For a better UX, always provide feedback to the user for each action they perform.

9. Know your users (behind the scenes!)
Knowing your users is the best thing you can do for your application. There are products out there that can capture user statistics, map server performance, number of hits, etc., but they cannot capture an individual user's feedback. If dissatisfied with your product, the majority of users will not complain or raise issues but will simply stop using your application. User surveys are another option, but they often fail to give a clear picture because of poor participation. It is always a good idea to capture user feedback behind the scenes. For example, if you have a customised 'search' button, log each of its click events. Try to capture who is searching for what, and how long it takes before they see the results. You can fine-tune your application based on this log - or even give users a 'hint' on effective searching.
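
Behind-the-scenes logging can be as simple as wrapping the search handler. The sketch below is illustrative Python rather than front-end JavaScript, and the `sink` callable is a placeholder for whatever log destination you use (a file appender, an HTTP endpoint, and so on).

```python
import json
import time

class SearchLogger:
    """Log who searched for what, how many hits came back, and how long it took."""

    def __init__(self, sink):
        self.sink = sink  # any callable accepting one JSON line

    def log_search(self, user, query, run_search):
        start = time.perf_counter()
        results = run_search(query)  # the application's actual search function
        self.sink(json.dumps({
            "user": user,
            "query": query,
            "hits": len(results),
            "elapsed_ms": round(1000 * (time.perf_counter() - start), 1),
        }))
        return results  # pass results through unchanged to the caller
```

Because the wrapper returns the results unchanged, it can be dropped around the existing search call without altering behaviour, and the accumulated log is exactly the who/what/how-long record the post describes.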

10. Secure your application
Security comes with a price. While confidentiality and integrity are achieved easily, availability is sometimes compromised: security restricts your application to a smaller footprint. Whether to 'share' or to 'secure' will be dictated by the business requirements. At the least, you should always secure your map services through tokens and Secure Sockets Layer (SSL), and make sure the Server Manager is not visible from outside your firewall.

