Monday, March 18, 2019

Project Portfolio Management: Theory vs. Practice

If you are responsible for managing portfolios of technology programs and projects, your success in maximizing business outcomes with finite resources is vital to your company’s future in a fast-changing and digital world.

Project portfolio management is the art and science of making decisions about investment mix, operational constraints, resource allocation, project priority and schedule. It is about understanding the strengths and weaknesses of the portfolio, predicting opportunities and threats, matching investments to objectives, and optimizing trade-offs encountered in the attempt to maximize return (i.e., outcomes over investments) at a given appetite for risk (i.e., uncertainty about return).

Most large companies have a project portfolio management process in place, and it mostly follows the traditional project portfolio management process as documented by PMI. This process is comprehensible and stable by nature.

Even better, it has the appearance of a marvelous mechanical system that can be followed in a planned, stable, and reproducible manner. In the end, the project with the greatest strategic contribution always wins the battle for the valuable resources.

Unfortunately, despite its apparent elegance, this process does not work well in the real world. The real world is characterized by uncertainty, difficulty, ever-changing market environments, and, of course, people, who do not function like machines.

When we look at technology projects, the primary goal of portfolio executives is to maximize the delivery of technology outputs within budget and schedule. This IT-centric mandate emphasizes output over outcome, and risk over return.

On top of this, the traditional IT financial framework is essentially a cost-recovery model that isn’t suitable for portfolio executives to articulate how to maximize business outcomes on technology investments.

As a result, portfolio management is marginalized to a bureaucratic overhead and a nice-to-have extension of the program and project management function.

So yes, in theory most large organizations have a project portfolio management function in place, but in practice it is far from effective.

Below are 11 key observations I have made in the last few years regarding effective project portfolio management:

1) No data and visibility.

The first theoretical benefit of effective project portfolio management concerns its ability to drive better business decisions. To make good decisions you need good data, and that’s why visibility is so crucial, both from a strategic, top-down perspective and from a tactical, bottom-up perspective.

Anything that can be measured can be improved. However, organizations don’t always do sufficient monitoring. Few organizations actually track project and portfolio performance against their own benchmarks, nor do they track dependencies.

Worse, strategic multiyear initiatives are the least likely to be tracked in a quantitative, objective manner. For smaller organizations, the absence of such a process might be understandable, but for a large organization, tracking is a must.

Not monitoring project results creates a vicious circle: If results are not tracked, then how can the portfolio management and strategic planning process have credibility? It is likely that it doesn’t, and over time, the risk is that estimates are used more as a means of making a project appear worthy of funding than as a mechanism for robust estimation of future results. Without tracking, there is no mechanism to make sure initial estimates of costs and benefits are realistic.

When you have a good handle on past project metrics, it makes it much easier to predict future factors like complexity, duration, risks, expected value, etc. And when you have a good handle on what is happening in your current project portfolio, you can find out which projects are not contributing to your strategy, are hindering other more important projects, or are not contributing enough value.

And once you have this data, don’t keep it in a silo visible only to a select group. Everyone involved in projects should be able to use this data for their own projects.
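
As a minimal sketch of what such tracking can look like, the snippet below compares estimated and actual figures for a handful of hypothetical past projects and derives overrun factors that can be used to calibrate future estimates. The data model and all numbers are illustrative assumptions, not a standard.

```python
# A minimal sketch (hypothetical data model): compare estimated vs. actual
# project figures to build the benchmark data most portfolios lack.
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    name: str
    estimated_months: float
    actual_months: float
    estimated_cost: float
    actual_cost: float

history = [
    ProjectRecord("CRM rollout", 6, 9, 500_000, 740_000),
    ProjectRecord("Data warehouse", 12, 19, 1_200_000, 2_100_000),
    ProjectRecord("Mobile app", 4, 5, 300_000, 330_000),
]

# Average overrun factors become calibration inputs for future estimates.
schedule_factor = sum(p.actual_months / p.estimated_months for p in history) / len(history)
cost_factor = sum(p.actual_cost / p.estimated_cost for p in history) / len(history)
print(f"Schedule overrun factor: {schedule_factor:.2f}")
print(f"Cost overrun factor: {cost_factor:.2f}")
```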

2) Many technology projects should not have been started at all.

Big data, blockchain, artificial intelligence, virtual reality, augmented reality, robotics, 5G, machine learning... Billions and billions are poured into projects around these technologies, and for most organizations, not much is coming out of it.

And this is not because these projects are badly managed. Quite simply, it is because they should not have been started in the first place.

I believe that one of the main reasons that many innovative technology projects are started comes down to a fear of missing out, or FOMO.

You may find the deceptively simple but powerful questions in “Stop wasting money on FOMO technology innovation projects” quite useful in testing and refining technology project proposals, clarifying the business case, building support, and ultimately persuading others why they should invest scarce resources in an idea or not.

3) Many projects should have been killed much earlier.

Knowing when to kill a project and how to kill it is important for the success of organizations, project managers and sponsors.

Not every project makes its way to the finish line, and not every project should. As a project manager or sponsor, you’re almost certain to find yourself, at some point in your career, running a project that has no chance of success, or that should never have been initiated in the first place.

The reasons why you should kill a project may vary. It could be the complexity involved, staff resource limitations, unrealistic project expectations, a naive and underdeveloped project plan, the loss of key stakeholders, higher priorities elsewhere, market changes, or some other element. Likely, it will be a combination of some or many of these possibilities.

What’s important is that you do it in time: 17 percent of IT projects go so badly that they can threaten the very existence of the company (Calleam).

Keep an eye out for warning signs, ask yourself tough questions, and set aside your ego. By doing so, you can identify projects that need to be abandoned right away. You might find “Why killing projects is so hard (and how to do it anyway)” helpful in this process.

4) Project selection is rarely complete and neutral.

This is often because the organization’s strategy is not known, not developed, or cannot be applied to the project (see Observation 10).

But besides this there is the “principal-agent problem”: your managers already know the criteria on which projects will be selected, and so they “optimize” their proposals accordingly. And even when the details are not “optimized,” the data is often collected in an incomplete and inconsistent manner.

And have you ever encountered the situation where projects were already decided on in rooms other than the one where the decision should have been made? I sure have.

5) Organizations do far too many projects in parallel.

Traditional project portfolio management is all about optimizing value and resource allocation. Both are often implemented in a way that, in my opinion, achieves the opposite. As I (and probably you too) have seen time and again, running an organization's projects at 100 percent resource utilization is an economic disaster.

Any small amount of unplanned work causes delays, which are compounded by the time spent re-planning. And value is created only when it is delivered, not when it is planned. Hence, we should focus on delivering value as quickly as possible within our given constraints. See “Doing the right number of projects” for more details, and the sketch below.
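
To see why high utilization hurts, consider basic queueing theory. The sketch below uses the M/M/1 waiting-time formula to show how average delay explodes as utilization approaches 100 percent; the numbers are illustrative, but the shape of the curve is the point.

```python
# A minimal sketch of why ~100% utilization is an economic disaster:
# in a simple M/M/1 queueing model, the average wait time explodes as
# utilization approaches 1. Numbers are illustrative only.
def avg_wait(utilization: float, service_time: float = 1.0) -> float:
    """Average time a task waits in queue (M/M/1): Wq = rho/(1-rho) * service time."""
    return utilization / (1.0 - utilization) * service_time

for rho in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:4.0%} -> wait {avg_wait(rho):6.1f}x service time")
# 50% -> 1.0x, 90% -> 9.0x, 99% -> 99.0x: a little unplanned work
# arriving at a fully loaded team causes enormous delays.
```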

6) Projects are done too slowly.

Too many organizations try to save money on projects (cost efficiency) when the benefits of completing the project earlier far outweigh the potential cost savings. You might, for example, be able to complete a project with perfect resource management (all staff is busy) in 12 months for $1 million. Alternatively, you could hire some extra people and have them sitting around occasionally at a total cost of $1.5 million, but the project would be completed in only six months.

What's that six-month difference worth? Well, if the project is strategic in nature, it could be worth everything. It could mean being first to market with a new product or possessing a required capability for an upcoming bid that you don't even know about yet. It could mean impressing the heck out of some skeptical new client or being prepared for an external audit. There are many scenarios where the benefits outweigh the cost savings (see "Cost of delay" for more details).

On top of delivering the project faster, when you are done after six months instead of 12 months you can use the existing team for a different project, delivering even more benefits for your organization. So not only do you get your benefits for your original project sooner and/or longer, you will get those for your next project sooner as well because it starts earlier and is staffed with an experienced team.
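
A rough way to reason about this trade-off is to put a number on the cost of delay. The sketch below reruns the example above with an assumed cost of delay of $150,000 per month; that figure is hypothetical and must be estimated per project.

```python
# A minimal sketch of the trade-off from the example above, adding a
# hypothetical cost-of-delay figure (what each month of waiting costs
# the business in lost or deferred benefits).
def total_cost(project_cost: float, duration_months: int,
               cost_of_delay_per_month: float) -> float:
    return project_cost + duration_months * cost_of_delay_per_month

COST_OF_DELAY = 150_000  # assumed value per month; estimate this per project

slow = total_cost(1_000_000, 12, COST_OF_DELAY)  # cost-efficient option
fast = total_cost(1_500_000, 6, COST_OF_DELAY)   # speed-optimized option
print(f"12-month option: ${slow:,.0f}")  # $2,800,000
print(f" 6-month option: ${fast:,.0f}")  # $2,400,000
# Break-even: $500,000 extra spend / 6 months saved = ~$83k per month.
# Any cost of delay above that makes the faster option the cheaper one.
```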

An important goal of your project portfolio management strategy should be to have a high throughput. It’s vital to get projects delivered fast so you start reaping your benefits, and your organization is freed up for new projects to deliver additional benefits.

7) The right projects should have gotten more money, talent and senior management attention.

Partly as a result of observations 5 and 6, but also because of a failure to focus and agree on which projects are really important, many of them are spread too thin.

Always selecting “the next project on the list, from top to bottom, until the budget runs out” does not work as a selection method for the project portfolio. The problem is that the availability of the right resources receives far too little consideration. Even a rough check based on the principle that “it looks good overall” can lead to severe bottlenecks in the current year.

Unlike money, people and management attention cannot be moved and scaled at will. This means that bottlenecks quickly become the determining factors and put strategic priority in conflict with feasibility. In addition, external capacity is not available in the desired quantity, and onboarding new employees creates friction, costs time, and temporarily reduces the capacity of the existing team instead of increasing it.

8) Project success is neither defined nor measured.

Defining project success is actually one of the largest contributors to project success, and I have written about it many times (see here and here). When starting any project, it's essential to work actively with the organization that owns the project to define success across three levels:

i) Project delivery
ii) Product or service
iii) Business

The process of "success definition" should also cover how the different criteria will be measured (targets, measurement methods, timing, responsible parties, etc.). Project success can then be identified as any result within the agreed ranges of these defined measurements. Success is not just a single point.

The hard part is identifying the criteria, importance, and boundaries of the different success areas. But only when you have done this are you able to manage your projects to success and identify them as such.
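
As a minimal sketch, assuming hypothetical criteria and ranges, success can be captured as a small data structure and measurements checked against it rather than judged against a single pass/fail point:

```python
# A minimal sketch (hypothetical criteria): success defined as agreed ranges
# across the three levels, then checked against actual measurements.
success_criteria = {
    "project delivery": {"schedule_variance_pct": (-10, 15),
                         "budget_variance_pct": (-5, 10)},
    "product/service":  {"defects_per_kloc": (0, 2),
                         "availability_pct": (99.5, 100)},
    "business":         {"adoption_pct_after_6m": (60, 100),
                         "annual_savings_eur": (250_000, float("inf"))},
}

def evaluate(level: str, measurements: dict) -> dict:
    """Check each measurement against its agreed success range."""
    results = {}
    for metric, (low, high) in success_criteria[level].items():
        results[metric] = low <= measurements[metric] <= high
    return results

print(evaluate("business", {"adoption_pct_after_6m": 72,
                            "annual_savings_eur": 310_000}))
# {'adoption_pct_after_6m': True, 'annual_savings_eur': True}
```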

9) Critical assumptions are not validated.

For large or high-risk projects (what is large depends on your organization) it should be mandatory to do an assumption validation before you dive headfirst into executing the project. In this phase you should do a business case validation and/or a technical validation in the form of a proof of concept.

Even if you do this, your project isn’t guaranteed to succeed. The process of validation is just the start. But if you’ve worked through the relevant validations, you’ll be in a far better position to judge if you should stop, continue or change your project.

The goal of the validation phase is to postpone the expensive and time-consuming work of project execution until as late as possible in the process. It’s the best way to keep yourself focused, to minimize costs and to maximize your chance of a successful project. See “No validation? No project!” for more details on this.

10) Your organization has no clear strategy.

Without having a strategy defined and communicated in your organization it is impossible to do effective project portfolio management. I like the definitions of Mintzberg and De Flander regarding this.

“Strategy is a pattern in a stream of decisions.” – Henry Mintzberg            

First, there’s the overall decision—the big choice—that guides all other decisions. To make a big choice, we need to decide who we focus on—our target client segment—and we need to decide how we offer unique value to the customers in our chosen segment. That’s basic business strategy stuff.

But by formulating it this way, it helps us to better understand the second part: the day-to-day decisions—the small choices—that get us closer to the finish line. When these small choices are in line with the big choice, you get a Mintzberg pattern. So if strategy is a decision pattern, strategy execution is enabling people to create a decision pattern. In other words:

“Strategy execution is helping people make small choices in line with a big choice.” – Jeroen De Flander

This notion requires a big shift in the way we typically think about execution. Looking at strategy execution, we should imagine a decision tree rather than an action plan. Decision patterns are at the core of successful strategy journeys, not to-do lists.

To improve the quality of strategy implementation, we should shift our energy from asking people to make action plans to helping them make better decisions.

11) Ideas are not captured.

Although there is clearly no shortage of ideas within organizations, these ideas are seldom captured, except in the few cases where a handful of employees are sufficiently entrepreneurial to drive their own ideas through to implementation. This can happen in spite of the organization, rather than because of it.

Organizations are effective at focusing employees on their daily tasks, roles, and responsibilities. However, they are far less effective at capturing the other output of that process: the ideas and observations that result from it. It is important to remember that these ideas can be more valuable than an employee’s routine work. Putting in an effective process for capturing ideas provides an opportunity for organizations to leverage a resource they already have, already pay for, but fail to capture the full benefit of—namely, employee creativity.

To assume that the best ideas will somehow rise to the top, without formal means to capture them in the first place, is too optimistic. Providing a simplified, streamlined process for idea submission can increase project proposals and result in a better portfolio of projects. Simplification is not about reducing the quality of ideas, but about reducing the bureaucracy associated with producing them. Simplification is not easy, as it involves defining what is really needed before further due diligence is conducted on the project. It also means making the submission process easy to follow and locate, and driving awareness of it.

Conclusion

In the digital age, an effective project portfolio management function is a strategic necessity.

The dilemma of traditional project portfolio management is that it grants too little weight to actual feasibility in favor of strategic weighting. In reality, it is more important to produce a portfolio that, in its entirety, has a real chance of succeeding. The portfolio should also be managed not in terms of a fiscal year, but ideally in much smaller time segments, with constant review and the possibility of reprioritization.

Therefore, the question should no longer be “what can we get for this fixed amount of money in the upcoming year,” but rather, “what is the order of priority for us today?”

Here, the perspective moves away from an annually recurring budget process and toward a regular exchange of results, knowledge, and changed conditions. In the best-case scenario, this penetrates the entire organization, from portfolio to project to daily work.

What do you think?

Read more…

Wednesday, March 13, 2019

Case Study: The epic meltdown of TSB Bank

With clients locked out of their bank accounts, mortgage accounts vanishing, small businesses reporting that they could not pay their staff, and reports of debit cards ceasing to work, the TSB Bank computer crisis of April 2018 was one of the worst in recent memory. The bank’s CEO, Paul Pester, admitted in public that the bank was “on its knees” and that it faced a compensation bill likely to run to tens of millions of pounds.

But let’s start from the beginning. First, we’ll examine the background of what led to TSB’s ill-fated system migration. Then, we’ll look at what went wrong and how it could have been prevented.

September 2013

When TSB split from Lloyds Banking Group (LBG) in September 2013, a move forced by the EU as a condition of its taxpayer bailout in 2008, a clone of the original group’s computer system was created and rented to TSB for £100m a year.

That banking system was a combination of many old systems (TSB, BOS, Halifax, Cheltenham & Gloucester, and others) that had resulted from the integration of HBOS with Lloyds during the banking crisis.

Under this arrangement, LBG held all the cards. It controlled the system and offered it as a costly service to TSB when it was spun off from LBG.

March 2015

When the Spanish Banco Sabadell bought TSB for £1.7bn in March 2015, it put into motion a plan it had successfully executed in the past for several other smaller banks it had acquired: merge the bank’s IT systems with its own Proteo banking software and, in doing so, save millions in IT costs.

Sabadell was warned in 2015 that its ambitious plan was high risk and that it was likely to cost far more than the £450m Lloyds was contributing to the effort.

“It is not overly generous as a budget for that scale of migration,” John Harvie, a director of the global consultancy firm Protiviti, told the Financial Times in July 2015. But the Proteo system was designed in 2000 specifically to handle mergers such as that of TSB into the Spanish group, and Sabadell pressed ahead.

Summer 2016

By the summer of 2016, work on developing the new system was meant to be well underway and December 2017 was set as a hard-and-fast deadline for delivery.

The time period to develop the new system and migrate TSB over to it was just 18 months. TSB people were saying that Sabadell had done this many times in Spain. But migrating tiny Spanish local banks is not the same as migrating sprawling LBG legacy systems.

To make matters worse, the Sabadell development team did not have full control—and therefore a full understanding—of the system they were trying to migrate client data and systems from because LBG was still the supplier.

Autumn 2017

By the autumn the system was not ready. TSB announced a delay, blaming the possibility of a UK interest rate rise—which did materialize—and the risk that the bank might leave itself unable to offer mortgage quotes over a crucial weekend.

Sabadell pushed back the switchover to April 2018 to try to get the system working. It was an expensive delay because the fees TSB had to pay to LBG to keep using the old IT system were still clocking up: CEO Pester put the bill at £70m.

April 2018

On April 23, Sabadell announced that Proteo4UK—the name given to the TSB version of the Spanish bank’s IT system—was complete, and that 5.4m clients had been “successfully” migrated over to the new system.

Josep Oliu, the chairman of Sabadell, said: “With this migration, Sabadell has proven its technological management capacity, not only in national migrations but also on an international scale.”

The team behind the development were celebrating. In a LinkedIn post that has since been removed, those involved in the migration were describing themselves as “champions,” a “hell of a team,” and were pictured raising glasses of bubbly to cheers of “TSB transfer done and dusted.”

However, only hours after the switch was flicked, systems crumbled, and up to 1.9m TSB clients who use internet and mobile banking were locked out.

Twitter had a field day as clients frustrated by the inability to access their accounts or get through to the bank’s call centers started to vent their anger.

Clients reported receiving texts saying their cards had been used abroad; that they had discovered thousands of pounds in their accounts they did not have; or that mortgage accounts had vanished, multiplied or changed currency.

One bemused account holder showed his TSB banking app recording a direct debit paid to Sky Digital 81 years from now. Some saw details of other people’s accounts, and holidaymakers complained that they had been left unable to pay restaurant and hotel bills.

TSB, to clients’ fury, at first insisted the problems were only intermittent. At 3:40 a.m. on Wednesday, April 25, Pester tweeted that the system was “up and running,” only to be forced to apologize the next day and admit it was actually only running at 50 percent capacity.

On Thursday he admitted the bank was on its knees, announced that he was personally seizing control of the attempts to fix the problem from his Spanish masters, and had hired a team from IBM to do the job. Sabadell said it would probably be next week before normal service returned.

The financial ombudsman and the Financial Conduct Authority have launched investigations. The bank has been forced to cancel all overdraft fees for April and raise the interest rate it pays on its classic current account in a bid to stop disillusioned clients from taking their business elsewhere.

The software that Pester, in September, had boasted was 2,500 man-years in the making, with more than 1,000 people involved, became a client service disaster that will cost the bank millions and tarnish its reputation for years.

The basic principles of a system migration

The two main things to avoid in a system migration are an unplanned outage of the service for users and loss of data, either in the sense that unauthorized users have access to data, or in the sense that data is destroyed.

In most cases, outages cannot be justified during business hours, so migrations must typically take place within the limited timeframe of a weekend. To be sure that a migration over a weekend will run smoothly, it is normally necessary to perform one or more trial migrations in non-production environments, that is, migrations to a copy of the live system which is not used by or accessible to real users. The trial migration will expose any problems with the migration process, and these problems can be fixed without any risk of affecting the service to users.

Once the trial migration is complete, has been tested, and any problems with it have been fixed, the live migration can be attempted. For a system of any complexity, the go-live weekend must be carefully pre-planned hour by hour, ensuring that all the correct people are available and know their roles.

As part of the plan, a rollback plan should be put in place. The rollback plan is a planned, rapid way to return to the old system in case anything should go wrong during the live migration. One hopes not to have to use it because the live migration should not normally be attempted unless there has been a successful trial migration and the team is confident that all the problems have been ironed out.

On the go-live weekend, the live system is taken offline, and a period of intense, often round-the-clock, activity begins, following the previously made plan. At a certain point, while there is still time to trigger the rollback plan, a meeting will be held to decide whether to go live with the migration or not (a “go/no go” meeting).

If the migration work has gone well, and the migrated system is passing basic tests (there is no time at that point for full testing; full testing should have been done on the trial migration), the decision will be to go live. If not, the rollback plan will be triggered and the system returned to the state it was in before the go-live weekend.
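
A minimal sketch of such a go/no-go decision is shown below. The checklist items are illustrative assumptions, not TSB's actual criteria: the logic is simply that if any agreed criterion fails, the rollback plan is triggered.

```python
# A minimal sketch of a go/no-go decision: go-live is attempted only if
# every agreed criterion checks out; otherwise trigger the rollback plan.
# Criteria names are hypothetical, not any bank's real checklist.
criteria = {
    "trial_migration_passed_full_testing": True,
    "dress_rehearsal_fit_into_weekend": True,
    "data_reconciliation_clean": True,
    "smoke_tests_on_migrated_system_passed": False,
    "rollback_plan_tested_and_ready": True,
}

failed = [name for name, ok in criteria.items() if not ok]
if failed:
    print("NO GO - trigger rollback plan. Failed criteria:", failed)
else:
    print("GO - proceed with live migration.")
```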

If the task of migration is so great that it is difficult to fit it into a weekend, even with very good planning and preparation, it may be necessary to break it into phases. The data or applications are broken down into groups which are migrated separately.

This approach reduces the complexity of each group migration compared to one big one, but it also has disadvantages. If the data or applications are interdependent, it may cause performance issues or other technical problems if some are migrated while others remain, especially if the source and destination are physically far apart.

A phased migration will also normally take longer than a single large migration, which will add cost, and it will be necessary to run two data centers in parallel for an extended period, which may add further cost. In TSB’s case, it may have been possible to migrate the clients across in groups, but it is hard to be sure without knowing its systems in detail.

Testing a system migration

Migrations can be expensive because it can take a great deal of time to plan and perform the trial migration(s). With complex migrations, several trial migrations may be necessary before all the problems are ironed out. If the timing of the go-live weekend is tight, which is very likely in a complex migration, it will be necessary to stage some timed trial migrations—“dress rehearsals.” Dress rehearsals are to ensure that all the activities required for the go-live can be performed within the timeframe of a weekend.

Trial migrations should be tested. In other words, once a trial migration has been performed, the migrated system, which will be hosted in a non-production environment, should be tested. The larger and more complex the migration, the greater the requirement for testing. Testing should include functional testing, user acceptance testing and performance testing.

Functional testing of a migration is somewhat different from functional testing of a newly developed piece of software. In a migration, the code itself may be unchanged, and if so there is little value in testing code which is known to work. Instead, it is important to focus the testing on the points of change between the source environment and the target. The points of change typically include the interfaces between each application and whatever other systems it connects to.

In a migration, there is often change in interface parameters used by one system to connect to another, such as IP addresses, database connection strings, and security credentials. The normal way to test the interfaces is to exercise whatever functionality of the application uses the interfaces. Of course, if code changes are necessary as part of a migration, the affected systems should be tested as new software.
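
As a minimal sketch of testing such points of change, the snippet below walks a list of interface endpoints (the hostnames and ports are hypothetical) and verifies that each one is reachable from the migrated environment. Real migration testing goes much further, but unreachable interfaces are among the first things to check.

```python
# A minimal sketch: verify the "points of change" after a migration by
# checking that each interface endpoint is reachable. Hostnames and ports
# are illustrative placeholders.
import socket

points_of_change = [
    ("payments-gateway.internal", 8443),
    ("core-banking-db.internal", 5432),
    ("auth-service.internal", 443),
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in points_of_change:
    status = "OK" if reachable(host, port) else "FAILED"
    print(f"{host}:{port} -> {status}")
```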

In the case of TSB, the migration involved moving client bank accounts from one banking system to another. Although both the source and target systems were mature and well-tested, they had different code bases, and it is likely that the amount of functional testing required would have approached that required for new software.

User acceptance testing is functional testing performed by users. Users know their application well and therefore have an ability to spot errors quickly, or see problems that IT professionals might miss. If users test a trial migration and express themselves satisfied, it is a good sign, but not adequate on its own because, amongst other things, a handful of user acceptance testers will not test performance.

Performance testing checks that the system will work fast enough to satisfy its requirements. In a migration the normal requirement is for there to be little or no performance degradation as a result of the migration. Performance testing is expensive because it requires a full-size simulation of the systems under test, including a full data set.

If the data is sensitive, and in TSB’s case it was, it will be necessary, at significant time and cost, to protect the data by security measures as stringent as those protecting the live data, and sometimes by anonymizing the data. In the case of TSB, the IBM inquiry into what went wrong identified insufficient performance testing as one of the problems.

What went wrong?

Where did it go wrong for TSB? The bank was attempting a very complex operation. There would have been a team of thousands drawn from internal staff, staff from IT service companies, and independent contractors. Their activities would have had to be carefully coordinated, so that they performed the complex set of tasks in the right order to the right standard. Many of them would have been rare specialists. If one such specialist is off sick, it can block the work of hundreds of others. One can imagine that, as the project approached go-live, having been delayed several times before, the trial migrations were largely successful but not perfect.

The senior TSB management would have been faced with a dilemma of whether to accept the risks of doing the live migration without complete testing in the trials, or to postpone go-live by several weeks and report to the board another slippage, and several tens of millions of pounds of further cost overrun. They gambled and lost.

How could TSB have done things differently?

Firstly, a migration should have senior management backing. TSB clearly had it, but with smaller migrations, it is not uncommon for the migration to be some way down senior managers’ priorities. This can lead to system administrators or other actors, whose reporting lines lie outside the migration team, frustrating key parts of the migration because their managers are not directing or paying them to cooperate.

Secondly, careful planning and control are essential. It hardly needs saying that it is not possible to manage a complex migration without careful planning, and those managing the migration must have an appropriate level of experience and skill. In addition, however, the planning must follow a sound basic approach that includes trial migrations, testing, and rollback plans as described above. While the work is going on, close control is important. Senior management must stay close to what is happening on the ground and be able to react quickly, for example by fast-tracking authorizations, if delays or blockages occur.

Thirdly, there must be a clear policy on risk, and it must be adhered to. What criteria must be met for go-live? Once these criteria have been set, the amount of testing required can be determined. If the tests are not passed, there must be the discipline not to attempt the migration, even if it will cost much more.

Finally, in complex migrations, a phased approach should be considered.

Conclusion

In the case of TSB Bank, the problems that occurred after the live migration were either not spotted in testing, or they were spotted but the management decided to accept the risk and go live anyway. If they were not spotted, it would indicate that testing was not comprehensive enough—IBM specifically pointed to insufficient performance testing. That could be due to a lack of experience among the key managers. If the problems were spotted in testing, it implies weak go-live criteria and/or an inappropriate risk policy. IBM also implied that TSB should have performed a phased migration.

It may be that the public will never fully know what caused TSB’s migration to go wrong, but it sounds like insufficient planning and testing were major factors. Sensitive client data was put at risk, and clients suffered long unplanned outages, resulting in CEO Paul Pester being summoned to the Treasury select committee and the Financial Conduct Authority launching an investigation into the bank. Ultimately Pester lost his job.

When migrating IT systems in the financial sector, cutting corners is dangerous; TSB’s case shows that the consequences can be dire. For success, one needs to follow some basic principles, use the right people, and be prepared to allocate sufficient time and money to planning and testing. Only then can a successful system migration be ensured.

Read more…

Sunday, March 10, 2019


10 Questions to Ask Before Signing Your Cloud Computing Contract
As pointed out in a previous article on cloud computing project management, two things that have changed a lot with the rise of cloud usage are vendor relationships and contracts.

Contracts for cloud computing are rather inflexible by nature. In a cloud computing arrangement, what's negotiable and what's not? Cloud computing may be highly virtualized and digitized, but it is still based on a relationship between two parties consisting of human beings.

Below you will find 10 questions you should have answered before you sign your cloud computing contract. In my experience, these are also the biggest discussion points between a cloud provider and you as a cloud customer when negotiating such a contract.

1) How can you exit if needed? 

The very first question you should ask is, how do you get out when you need to? Exit strategies need to be carefully thought out before committing to a cloud engagement.

Vendor lock-in typically results from long-term initial contracts. Some providers want early termination fees (which may be huge) if customers terminate a fixed-term contract early for convenience, as the recovery of fixed setup costs was designed to be spread over the term.

Often, contracts require "notice of non-renewal within a set period before expiry," causing customers to miss the window to exit the arrangement. Such onerous automatic renewal provisions can be negotiated out up front.

One other very important aspect of your exit strategy is the next question.

2) Who maintains your data for legal or compliance purposes, and what happens to it when contracts are terminated?

I have not seen a lot of negotiation yet around data retention for legally required purposes, such as litigation e-discovery or preservation as evidence upon law enforcement request. I think this issue will become more important in the future. One area that is being negotiated with increasing urgency is the ability to have your data returned upon contract termination. There are several aspects here: data format, what assistance (if any) providers will give users, what (if anything) providers charge for such assistance, and data retention period.

Another question that comes up is how long after termination users have to recover data before deletion. Many providers delete all data immediately or after a short period (often 30 days), but some users obtain longer grace periods, for example two months, perhaps requiring notice to users before deletion.

3) Who is liable for your damages from interruptions in service? 

For the most part, cloud providers refuse to accept liability for service interruption issues. Providers state liability is non-negotiable, and “everyone else accepts it.” Even large organizations have difficulty getting providers to accept any monetary liability. This can be a deal-breaker.

4) What about service level agreements (SLAs)? 

Service level agreements are another important piece of a cloud contract, and they come in many flavors, since standards are lacking in this area. SLAs are often highly negotiable, as they can be adjusted through pricing: the more you pay, the better the performance you are guaranteed. If SLAs are not met, compensation in the form of a service credit is normal. But how much? The sketch below shows a typical tiered model.
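
As a rough illustration, service credits are often tiered by monthly uptime. The tiers and percentages below are hypothetical assumptions; check your provider's actual SLA for the real numbers.

```python
# A minimal sketch of a tiered service-credit model (tiers and percentages
# are illustrative placeholders, not any provider's actual terms).
def service_credit(monthly_uptime_pct: float, monthly_fee: float) -> float:
    if monthly_uptime_pct >= 99.9:
        credit_pct = 0        # SLA met, no credit
    elif monthly_uptime_pct >= 99.0:
        credit_pct = 10
    elif monthly_uptime_pct >= 95.0:
        credit_pct = 25
    else:
        credit_pct = 100
    return monthly_fee * credit_pct / 100

print(service_credit(99.4, 10_000))  # 1000.0 -> a 10% credit on the fee
```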

5) Does availability extend to your data? 

Cloud providers tend to emphasize how redundant and fault-tolerant their clouds are, but cloud customers still need to do their due diligence. Like fire insurance for an apartment, the provider will rebuild the structure but not compensate the renter for the damaged contents. While some providers will undertake to make the necessary number of backups, most will not take steps to ensure data integrity, or accept liability for data loss.

6) What about the privacy and residency of your data?

GDPR is an important piece of data privacy legislation that regulates how data on EU citizens needs to be secured and protected. GDPR prohibits storing such data outside the boundaries of the EU without additional safeguards.

With the European Court of Justice’s ruling in 2015 that the Safe Harbor framework is inadequate to protect the privacy rights of EU citizens when their data is processed in the United States, it’s important to check if your U.S. provider is a member of the Privacy Shield Framework.

Some providers will not disclose data center locations. Verifying that data are actually residing and processed in the data centers claimed by providers is technically difficult.

7) What happens when your provider decides to change their service?

Many standard terms allow providers to change certain or all contract terms unilaterally. Enterprise cloud providers are more likely to negotiate these provisions up front, as are infrastructure providers. But for the bulk of businesses using more commoditized Software as a Service (SaaS) applications, you might have to accept providers’ rights to change features.

Customers are able to negotiate advance notification of changes to Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) engagements. However, because these services reach deeper into your organizational systems, such changes could force you to rewrite application code built to integrate with proprietary provider application programming interfaces.

8) How do you manage your intellectual property rights? 

Intellectual property rights are a frequently debated issue. Providers’ terms may specify that they own deliverables, for example, documentation. However, the line is sometimes unclear between a customer’s application and the provider’s platform and integration tools. Where integrators develop applications for their own customers, customers might require intellectual property rights ownership, or at least rights to use the software free of charge after contract termination or transfer.

Another issue of contention concerns ownership rights to service improvements arising from customer suggestions or bug fixes. Providers may require customers to assign such rights. Yet customers may not want their suggested improvements to be made available to competitors.

9) What are the reasons for your service termination?

Non-payment is the leading reason providers terminate contracts with customers, but there are many other issues that crop up, which may or may not be the customer's fault. Other reasons providers pull their services include material breach, breach of acceptable use policies, or upon receiving third-party complaints regarding breach of their intellectual property rights.

The main issue is that the actions of one user of a customer may trigger rights to terminate the whole service. However, many services lack granularity. For instance, an IaaS provider may not be able to locate and terminate the offending VM instance, and may therefore need to terminate the entire service.

Providers, while acknowledging this deficiency, still refuse to change terms, but state they would take a commercial approach to discussions should issues arise.

10) When was your provider’s last independent audit?

Most cloud providers boast their compliance with the regulatory scheme du jour. But any cloud customer—especially one working in a highly regulated industry—should ask a provider: "How long ago was your last independent audit against the latest [relevant] regulatory protocols?"

Even for cloud customers that don't operate within a highly regulated sector, it might be a plus to know that a selected provider can pass a stringent regulatory audit.

Conclusion

When cloud customers seek to negotiate important data security and data privacy provisions, a common response from cloud providers is that the terms and conditions with which the customer has been presented are a "standard contract," implying that they are, as such, non-negotiable.

A good counter-response is: "I understand—and these are my standard amendments to the standard contract."

Try asking a cloud provider if they have ever added, waived, or modified a contentious provision for other customers. See how they respond.

An organization's data represents its crown jewels. As such, no cloud customer should just lie down for a disadvantageous, and potentially harmful, cloud contract.

A cloud contract is just that: a contract. As such, it carries with it all of the normal pitfalls of a contractual relationship—and a few specialized ones. By asking the right questions, you’ll ensure your rights are protected.

Read more…

Wednesday, February 27, 2019

I bet your large technology project will be late and cost way more than you expected

Yes, I am willing to make that bet without knowing anything about your project, your team, your organization, your timelines, or your budget. Here is why:

Activities within technology projects are usually so complex that if you were to repeat them under identical conditions, the time required to complete them would vary. After all, people are involved in performing these activities, and as you may know, the behavior of people is non-deterministic.

When you estimate or guess the duration of an activity, you're really dealing with a range of possible outcomes. If plotted as a distribution, the actual results for a number of identical trials would have a shape such as that shown in the figure below. There's a minimum duration, a most likely duration, and a maximum duration.

[Figure: distribution of activity durations, showing minimum, most likely, and maximum durations]
The shape of this distribution varies with the nature of the activity. For example, if you’ve done the activity many times and everything involved in it is fairly predictable, the minimum and maximum duration are close together; that is, the distribution is very narrow.

If the activity has never been tried before and some aspects of the execution are poorly understood, the minimum duration and maximum duration might be widely separated: the time needed for the activity could be quite variable.

For example, setting up a GitHub repository is an activity that's well understood (at least by most developers). If you measure the time required by experienced professionals to set up such a repository, you would probably arrive at a fairly narrow distribution. On the other hand, building your own source code repository is an activity that is not well understood. The distribution of durations for that activity is more likely to be fairly broad.

Distributions for the duration of less predictable activities have another significant feature. Although the minimum duration is probably well below the most likely duration, the time between them is usually far less than the time between the most likely duration and the maximum duration. The distributions are likely to have shapes like the one below.

[Figure: long-tailed (right-skewed) duration distribution]
These long-tailed distributions are very common. While there are some pretty hard limits on how fast things can be done, there are no limits on how slow things can be done. For example, no matter how hard you try, you probably cannot get your Windows laptop booted and working with a projector in less than two minutes, even if you pray. But even without trying, it can sometimes take you 20 minutes, even though it's only a small security update that is installing.

If you're prudent, you leave a little extra time to get yourself ready for presenting in that important meeting, and even then, you're just screwed.

This is just a small example. Now imagine the distribution of times required to integrate and test a complex API. If all goes well, it might take four weeks; rarely less. But if things don't go well, it could take months.

Why am I so sure it rarely takes less? I take into account Parkinson's law.

Work expands so as to fill the time available for its completion.

So why are projects always late?

The answer is that the estimates most of us make for the duration of an activity are of the "most likely" type. That is, we tend to use the most likely value of the duration as our estimate. Since the most likely value (the mode) is usually less than the mean duration for most distributions in real life, we bias our estimates towards the short end of the distribution, as illustrated below.

[Figure: most likely (modal) estimate versus the mean of a right-skewed duration distribution]
For a single activity, this isn't good, but it isn't a disaster. True, on average, your performance would be less than stellar. Depending upon the exact shape of the distribution, you would typically underestimate the actual duration by about 10-20%.

The problem becomes much more severe when you look at projects in which the project duration is the result of several such underestimates, in a series, as would be the case along the critical path of a complex technology project. If you're significantly late on some activities, no amount of early delivery on other activities can compensate for it.
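
The compounding effect is easy to demonstrate with a small Monte Carlo simulation. The sketch below sums long-tailed (lognormal) activity durations along a critical path and compares the outcome with a plan built from per-activity "most likely" values; the parameters are illustrative, not calibrated to any real project.

```python
# A minimal sketch: sum long-tailed activity durations along a critical
# path and compare with a plan built from per-activity modes.
import math
import random

random.seed(42)
N_ACTIVITIES = 10      # activities in series on the critical path
MU, SIGMA = 2.0, 0.5   # lognormal parameters per activity (illustrative)

# A "most likely" plan: the sum of per-activity modes.
# The mode of a lognormal distribution is exp(mu - sigma^2).
plan = N_ACTIVITIES * math.exp(MU - SIGMA ** 2)

# Simulate 10,000 projects with long-tailed activity durations.
trials = [sum(random.lognormvariate(MU, SIGMA) for _ in range(N_ACTIVITIES))
          for _ in range(10_000)]
mean_actual = sum(trials) / len(trials)
late = sum(t > plan for t in trials) / len(trials)

print(f"Plan (sum of modes):  {plan:6.1f}")
print(f"Mean actual duration: {mean_actual:6.1f}")
print(f"Share of runs late:   {late:5.1%}")  # typically well over 90%
```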

The Planning Fallacy

What I described above is one possible explanation for what is also known as the planning fallacy. First proposed by Daniel Kahneman and Amos Tversky in 1979, the planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future activity display an optimistic bias and underestimate the time required.

This phenomenon sometimes occurs regardless of the individual's knowledge that past activities of a similar nature have taken longer to complete than generally planned.

Or as Hofstadter's Law formulates it:

It always takes longer than you expect, even when you take into account Hofstadter's Law.

The bias only affects predictions about one's own activities. When outside observers predict activity completion times, they show a pessimistic bias, overestimating the time needed.

In 2003, Lovallo and Kahneman proposed an expanded definition: the tendency to underestimate the time, costs, and risks of future actions while at the same time overestimating the benefits of those same actions.

According to this definition, the planning fallacy results in not only time overruns, but also cost overruns and benefit shortfalls.

Conclusion

The above is why I will make the bet blindly; statistics seem to support my bet. According to multiple sources (KPMG Project Management Survey 2017, Standish Group Chaos Report 2015) more than 20% of large technology projects fail outright and another 50% are over time and budget. Also, the failure rate of projects with budgets over $1M is 50 percent higher than the failure rate of projects with budgets below $350,000 (Gartner).

Funnily enough, almost 100 percent of all project managers and sponsors believe their project belongs to the other 30 percent.

In one of my next articles I will discuss some measures you can take to help mitigate the planning fallacy and make better estimations.

Read more…

Tuesday, February 19, 2019

How cloud computing is changing project management

Many organizations have started their cloud transition, but cloud computing is still new enough that project management practices have yet to catch up.

There aren’t a lot of resources available about managing cloud (transformation) projects, which is rather strange, because the way cloud computing works has a major impact on the skills a project manager needs to be successful in delivering such projects.

One of the first hurdles is recognizing that cloud computing is not a single technology, product, or design – it really is a new approach to IT and doing business. I am not a big fan of bullshit bingo, but this you really can call a paradigm shift.

Some projects configure cloud services as a private cloud, while others use pre-defined public SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service) offerings. Of course, you can combine these in what are known as hybrid clouds, or join forces with other organizations in creating a community cloud.

Examples of typical cloud projects include:

> Converting to an externally hosted business application, such as Salesforce;

> Implementing Microsoft Office 365 in the cloud;

> Creating a cloud-based server/storage infrastructure as a standard resource for corporate users;

> Establishing a standard program testing platform and deploying a set of development tools for cloud application development (i.e., creating a PaaS environment);

> Developing a new cloud-based enterprise application using an existing PaaS environment;

> Implementing a cloud-based data backup or disaster recovery system; and

> Acquiring a cloud-based security management system.

This article will describe what are, in my opinion, the 10 most important things you need to be aware of when managing cloud projects compared to on-premises projects.

Security & Compliance

You need to understand that a migration to the cloud will often completely disrupt an organization's existing security and governance strategy. Governance methods that worked for traditional on-premises systems probably won't work for the cloud.

As organizations move data to the public cloud, their control decreases and more responsibility falls on the shoulders of the cloud providers. Therefore, organizations must shape their security governance strategies to rely less on internal security and control, and more on their cloud provider's offerings.

Since security is never 100 percent perfect, it's important for you to plan ahead for potential breaches, failover and disaster recovery.

And of course, these additional security tools and services will increase overall project and operational costs.

Data Privacy & Residency

You need to be aware that there are laws in specific states, countries, or governmental associations such as the European Union (EU) that dictate that sensitive or private information may not leave the physical boundaries of the country or region (residency), and that the information should not be exposed to unauthorized parties (privacy).

Example legislation includes:

> The United Kingdom Data Protection Act

> The Swiss Federal Act on Data Protection

> Russian Data Privacy Law

> The Canadian Personal Information Protection and Electronic Documents Act (PIPEDA)

The EU Data Protection Directive is also an important piece of data privacy legislation that regulates how data on EU citizens needs to be secured and protected. With the European Court of Justice’s ruling in 2015 that the Safe Harbor framework is inadequate to protect the privacy rights of EU citizens when their data is processed in the United States, data privacy professionals expect to see additional data privacy legislation and restrictions appear across Europe.

Besides these general data protection laws, there are also industry-specific compliance requirements that can affect your project. Examples of such requirements include:

> The Health Insurance Portability and Accountability Act (HIPAA)

> Swiss Banking Secrecy

> The Health Information Technology for Economic and Clinical Health (HITECH) Act

> The Payment Card Industry Data Security Standards (PCI DSS)

And then there are third-party obligations: Agreements among business partners that outline how a party such as a contractor or vendor will handle and treat private or sensitive data belonging to another organization.

Such agreements often hold the external party accountable for securing the data in the same fashion as the owner of the data, including adherence to all residency, privacy, and compliance requirements.

For example, a contracted agency performing billing for a hospital in the U.S. must observe all the data protection requirements mandated by HIPAA and HITECH.

Vendor Relationships & Contract Negotiation

While project managers have always needed to have contract negotiation skills, the move to cloud requires you to employ vendor relationship and contract negotiation skills much more often.

This adds overhead, because the development of even a small application necessitates working with the vendor to iron things out.

Cloud and SaaS vendors need to stay in business, and are not likely to cut customers slack in many areas. Buyers need to be prepared to assert their organizations' best interests on the questions of service interruptions, service-level agreements, data availability and physical location, and intellectual-property rights.

Service Level Agreements

Cloud computing involves a division of responsibilities between users and providers that needs to be based on well-defined agreements, usually called Service Level Agreements (SLAs).

Every cloud services provider has an SLA that specifies what is being provided, how well it should work, what remedies are available if it fails, and how much it costs. For example, Microsoft has its Azure SLA(s) and Amazon has its EC2 SLA.

Many project managers may not have had much prior experience with this type of agreement. You will when you are involved in a cloud computing project.
SLA concepts can be applied at three different interface points:

> End User/Solution Owner – an SLA between parties within the enterprise that specifies what the IT Department provides to its business customers;

> Solution Owner/Internal Provider (or a Broker) – an Operational Level Agreement (OLA) that codifies service agreements between departments or with a cloud broker; and

> Internal Provider/External Provider – an SLA between the organization and a cloud service provider.

Team Size & Skills

The size of local project teams has been greatly reduced, and the skillsets of those who need to stay onsite have changed. This means effective coordination and communication between geographically and organizationally dispersed teams become even more important than they were in the past.

Often, there are no internal team members involved in the design and architecture work. You interact with the vendor's designers and architects remotely, with them coming onsite for meetings as needed.

The coordination overhead increases as you still have to take care of oversight responsibilities ranging from estimation through testing, but with external vendor personnel.

Financial Literacy

You will be required to deal with environments that are going to be a mix of applications hosted on onsite servers and those hosted at cloud sites. When a new application is to be developed, you need to perform cost and ROI analysis for both options. This requires knowledge of cost for cloud-based environments and expertise at creating both a detailed project budget and an operational budget.

This can be challenging, because a disadvantage of running things in the cloud is that costs can be wildly unpredictable. Public cloud providers, with the exception of most SaaS providers, are not known for using simple billing models. Typically, you are billed based on the resources you consume. This includes storage resources, but also CPUs, memory, and storage I/O. Resource consumption may be billed differently at different times of the day, and not all activity is treated equally.
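
As a minimal sketch of consumption-based billing, the snippet below estimates a monthly bill from usage and unit prices. All prices and usage figures are placeholders; real providers price per region, tier, and sometimes time of day.

```python
# A minimal sketch of a consumption-based monthly cost estimate.
# Unit prices and usage numbers are hypothetical placeholders.
usage = {
    "vm_hours": 2 * 730,          # two VMs running all month
    "storage_gb_month": 500,
    "storage_io_million_ops": 40,
    "egress_gb": 200,
}
unit_price = {                    # assumed $ per unit
    "vm_hours": 0.12,
    "storage_gb_month": 0.02,
    "storage_io_million_ops": 0.40,
    "egress_gb": 0.09,
}

monthly = sum(usage[k] * unit_price[k] for k in usage)
print(f"Estimated monthly cost: ${monthly:,.2f}")
# Re-run with pessimistic usage numbers: consumption, not the estimate,
# drives the bill, which is why cloud costs are hard to predict.
```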

Technical Proficiency

You don’t have to be an engineer or solutions architect to run a cloud-based project. That said, the more technical knowledge you have, the better. At the very least, it helps to know the differences between VMware, Azure, Google Cloud Platform and AWS, and how your company or vendor deals with these differences. After all, as the unified cloud becomes more of a reality, knowledge of gaps and bridges will only enhance your project skills and contribution to the project.

Because the architectural landscape for applications gets more complicated after the move to the cloud, deeper knowledge of the organization’s enterprise architecture comes in handy. It helps ensure that new applications are developed with the correct business and technical requirements and work seamlessly with the existing applications hosted in the cloud and onsite.

Risk Management

The use of external providers, or a hybrid of internal and external services, can introduce additional business, technical, and project risks. A provider’s reputation, and any breaches it suffers, affect you more than ever. Sensationalized stories about data loss in the cloud and well-publicized security breaches can make it difficult to gain support for cloud systems, especially public clouds. As a project manager, you will spend a lot of time allaying fears, proving the solution, and answering stakeholder questions.

Exit Strategies

Exit strategies need to be carefully thought out before committing to a cloud engagement. Vendor lock-in typically results from long-term initial contracts. Some providers charge early termination fees, which may be substantial, if organizations end a fixed-term contract early for convenience.

Often, contracts require “notice of non-renewal within a set period before expiry,” causing users to miss the window to exit the arrangement. Such automatic renewal provisions can be negotiated out up front.

Another way to avoid lock-in is to use several providers, avoiding over-reliance on any one provider’s service and its (possibly proprietary) application programming interfaces.
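
In the codebase itself, a common companion tactic is to hide provider-specific APIs behind a thin internal interface, so that switching providers means swapping one adapter rather than rewriting application code. A minimal sketch, with a hypothetical interface and a stand-in adapter:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Thin internal interface; application code depends on this, not a vendor SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; real adapters would wrap a provider SDK such as boto3."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

# Swapping providers means swapping the adapter, not the application code.
store: ObjectStore = InMemoryStore()
store.put("report.pdf", b"%PDF-1.4 ...")
print(store.get("report.pdf"))
```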

Cloud-to-Cloud Migrations

Cloud migrations aren't just a transition from on-premises technology to the cloud; you can also migrate data from one cloud to another. These cloud-to-cloud migrations include moves from one provider to another, as well as migrations between private and public clouds. However, the migration process from private clouds to public clouds can be difficult.

While third-party tools are available to help, there is no comprehensive tool to handle the entire migration process. Cloud-to-cloud migrations involve considerable manual labor. To prepare for migration from one provider to another, organizations need to test their applications and make all necessary configurations for virtual machines, networks, operating systems and more.
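
A lightweight way to make that manual preparation repeatable is a scripted pre-flight checklist. The checks below are placeholders for whatever your source and target environments actually require, not output from any real migration tool:

```python
# Hypothetical pre-flight checks for a cloud-to-cloud move; each value would
# come from a real test or configuration audit in practice.
checks = {
    "VM sizes mapped to target-provider equivalents": True,
    "Network and firewall rules translated": True,
    "OS images supported on the target platform": False,
    "Application smoke tests passing on the target": False,
}

blockers = [name for name, ok in checks.items() if not ok]
if blockers:
    print("Not ready to migrate:")
    for blocker in blockers:
        print(f"  - {blocker}")
else:
    print("All pre-flight checks passed.")
```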

Conclusion

For most project managers, managing cloud computing projects means entering unfamiliar territory. As you can see, the things a project manager needs to be aware of in order to be effective are different for cloud computing projects than for traditional on-prem projects. When considering a move to the cloud, you will need to skill up and learn about contracts, SLAs, laws, technology, and more.

This was the first article in a series about managing cloud projects.


Tuesday, February 12, 2019

Stop wasting money on FOMO technology innovation projects

Big data, blockchain, artificial intelligence, virtual reality, augmented reality, robotics, 5G, machine learning... Billions and billions are poured into projects around these technologies, and for most organizations, not much is coming out of it.

And this is not because these projects are badly managed. Quite simply, it is because they should not have been started in the first place.

I believe that one of the main reasons that many innovative technology projects are started comes down to a fear of missing out, or FOMO.

FOMO is the pervasive apprehension that others might be having rewarding experiences that you are not. This social anxiety is characterized by a desire to stay continually connected with what others are doing.

FOMO can also be described as a fear of regret, which may lead to a compulsive concern about the possibility of missing an opportunity for social interaction, a novel experience, a profitable investment, or another satisfying event.

In other words, FOMO perpetuates the fear of making wrong decisions on how you spend time and money, and it’s all due to your imagination running wild.

This fear is not limited to individuals. Organizations are victims of FOMO as well. And you will find that fear prominently on display in the many technology innovation projects that are started.

So before you start your next technology innovation project, please ask yourself the following fourteen questions. And if you’re not happy with the answers, don’t start spending time and money just because you fear missing out.

1) Why do anything at all?

First, be sure the project lies clearly in the direction your organization is heading.

It’s important to fully understand your organization’s latest strategies, priorities and targets. This helps prevent fundamental errors early on. Strategic thinking could even reveal larger opportunities than you first considered. After all, a complex technology project requires your very best people and a great amount of their focus.

2) Why do this exactly?

There are three basic ways to create value: earn more, spend less, or do things more efficiently. Decide what you’ll focus on and be able to explain why the project will do something meaningful towards that.

What customer do you want to serve with this solution? Will it involve selling more to existing customers, or pitching to entirely new ones? What job do you want to help them do better? Is the problem even big enough?

Clayton Christensen, the famed Harvard Business School professor known for coining the term “disruptive innovation,” believes that one of his most enduring legacies will be an idea he first put forward in his 2003 book "The Innovator’s Solution": don’t sell products and services to customers, but rather try to help people address their jobs to be done.

What if the benefits of the project are less tangible at this stage? Proceeding in order to gain market and product knowledge, develop new capabilities, find new partners and test possible models is perfectly valid. The challenge then is to articulate the benefits effectively.

3) What does success look like?

Before you start a project it's essential to work actively with the organization that owns it to define success across three levels:

i) Project delivery success is about defining the criteria by which the process of delivering the project is successful.

Essentially, this addresses the classic triangle of "scope, time, budget." It is limited to the duration of the project, and success can be measured as soon as the project is officially completed (with intermediary measures being taken, of course, as part of project control processes).

ii) Product or service success is about defining the criteria by which the product or service delivered is deemed successful.

For example, the system is used by all users in scope, uptime is 99.99 percent, customer satisfaction has increased by 25 percent, operational costs have decreased by 15 percent, and so on.

These criteria need to be measured once the product/service is implemented, over a defined period of time. This means they cannot be measured immediately at the end of the project itself.

iii) Business success is about defining the criteria by which the product or service delivered brings value to the overall organization, and how it contributes financially and/or strategically to the business.

For example, financial value contribution (increased turnover, profit, etc.) or competitive advantage (market share won, technology advantage).

Once these possible benefits are projected, examine whether those outcomes are realistic.
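
One way to keep the three levels honest is to record each criterion with an explicit target and a measurement window, since most of them cannot be verified at project close. A minimal sketch, with invented metrics and targets:

```python
# Illustrative criteria for the three levels above; the specific metrics,
# targets, and measurement windows are made up for the example.
success_criteria = [
    ("project delivery", "delivered within approved budget", "at project close"),
    ("product/service",  "uptime >= 99.99%",                 "6 months after go-live"),
    ("product/service",  "operational costs down 15%",       "12 months after go-live"),
    ("business",         "market share up 2 points",         "24 months after go-live"),
]

for level, metric, window in success_criteria:
    print(f"{level:>16} | {metric:<32} | measured {window}")
```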

4) Will people pay for it?

This seems quite self-explanatory, but the number of products large organizations build for which there is little eventual market appetite is not to be sneezed at. You want to build prototypes to validate not only the product/solution fit but also the revenue/pricing model.

5) Will it cost less to deliver than people are willing to pay for it?

What's the cost of delivery per customer, and what is the customer acquisition cost? You need to understand both and compare them to the lifetime value generated per customer. If each customer is worth approximately $1 million but it costs you $1.1 million to deliver the product, then you're operating at a loss and need to rethink your business model.
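
The arithmetic is simple enough to sanity-check in a few lines of Python; the acquisition-cost figure below is invented to round out the example from the text.

```python
def unit_margin(lifetime_value, delivery_cost, acquisition_cost):
    """Contribution per customer; a negative result means every sale loses money."""
    return lifetime_value - delivery_cost - acquisition_cost

# The example from the text (~$1.0M of lifetime value vs. $1.1M to deliver),
# plus a hypothetical $50k customer acquisition cost:
margin = unit_margin(1_000_000, 1_100_000, 50_000)
print(f"Margin per customer: {margin:+,} USD")  # -> -150,000 USD: rethink the model
```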

6) Is there already an effective but ‘less innovative’ solution to your problem?

Many of the problems we try to solve in technology already have solutions that we know work. For example:

> WORM storage vs. Blockchain when it comes to immutability
> Simple scripts vs. expensive RPA solutions

These solutions are not new and we have plenty of data to show that they are effective in solving the problem that they’re trying to address.

If the problem that you’re looking to “solve” already has proven and effective solutions, then maybe you don’t need an innovative new idea. You just need more funding for boring solutions that actually work.

7) Has it been done before? 

It's important to look at already available analogs (other products that validate market appetite) and antilogs (products that invalidate market appetite).

For example, analogs for the iPod were the Sony Walkman (these validated mobile music consumption) and MP3 players (these validated that people would download MP3s to external devices). Likewise, you want to identify failures and understand why they failed and whether or not this learning presents an opportunity.

Be aware that other organizations may have already attempted the innovation that you are advocating, and failed. One of the problems we face in development is that not all failures are reported, making it hard to learn from all that has come before us.

8) Are you trying to solve the underlying problem using only technology?

Technology can do many things to an existing process. For example, it can:

> Make the process faster
> Make the process more reliable
> Store lots of data
> Make the process more interactive
> Allow people involved to communicate more easily.

However, technology alone cannot solve an underlying problem. Taking a bad system and replicating that bad system with better technology won’t necessarily lead to improvements.

In my experience, the most effective innovation projects are ones that make incremental improvements to an existing process or system using technology (e.g., making the process faster, more reliable, etc.).

A fundamental requirement is that people are already using the existing system. If they don’t use the existing system, then they probably won’t use the new one.

9) Can you test it relatively quickly, economically and effectively using your existing networks and ability to prototype?

If you can't, then you won't be able to move quickly enough and may over-commit time and money to something for which there is little appetite. However, today all it takes is a little imagination to build prototypes for even the most ambitious technical endeavors. The first prototype for Google Glass was built in just one day.

10) Is this scalable?

If you are successful in finding product-market fit and validating your business model, what will it take to scale up by 100x, 1,000x, or 10,000x? Can you scale on your own, or do you need funding and partners? How would you go about that? Would scale affect your business model and what makes your company tick? What would be the cost per unit at scale? Will your business model still make sense?

11) Why not wait?

Why act now? You need to be able to explain why failure to act now will threaten the organization.

There is often less risk in not being first to market. Competitors can react quickly and effectively, simply learning your methods and replicating your gains without making large investments themselves.

The net result then could be restricted to temporary market share gains and, potentially, lower long-term industry prices. Even if you’re seeking to respond to a new functionality launched by a competitor, be sure of why you can’t wait to see their market reception before initiating action.

If the additional revenue gains are not large enough to win sufficient internal support over other opportunities, you must be able to point to other reasons that compel action now.

12) Who could or should do this internally?

Often, the source of innovation is not where the execution ability sits.

Identify which internal departments need to be involved. If the project proceeds, they will hear about it anyhow. It’s better to learn from their perspective and insight early on.

This is also important if you need both central corporate and local divisional sponsors, and it is unclear where the full cost should appear.

13) Who externally could do this better?

Determine if you have the necessary skill-sets internally to achieve the optimum outcome.

It is expensive and risky to create an entirely new platform yourself unless the opportunity is large enough. Can you outsource some of the components involved? Instead of developing new systems, could licensing or working with a third-party vendor improve speed, flexibility, and scalability? Maybe you could buy a startup in the space you are targeting.

14) What else could I do instead?

In a competitive trading environment with constant pressure to innovate, the project generation, review, and approval process often only involves a few individuals. This may help with focus, but it also removes valuable alternative perspectives.

Project sponsors should actively seek alternative views and ensure relevant experts are consulted. It is much more persuasive if project sponsors show they have thoroughly considered alternative, organic growth strategies.

Conclusion

Is the why greater than the how?

Ideally, organizations should only do as many things as they can do well.

You should, of course, ensure that you are exploring all opportunities that create value for the organization. However, once a growth area is identified, the key to making the right decision depends on many variables and estimates, as well as the judgments of senior executives.

You may find these deceptively simple but powerful questions quite useful in testing and refining technology project proposals, clarifying the business case, building support, and ultimately persuading others why they should, or should not, invest scarce resources in an idea.


Thursday, February 07, 2019

Power, politics, and getting sh!t done as a project manager

Leadership and management are ultimately about being able to get things done. Your skills and qualities as a project manager help you to achieve the project goals and objectives. One of the most effective ways to do this is through the use of power. Along with influence, negotiation, and autonomy, power is one of the key elements of politics.

Power and politics are probably the most important topics in project management, yet they are among the least discussed. They are neither “good” nor “bad,” “positive” nor “negative” in themselves. Each organization works differently, and the better you understand how your organization works, the more likely it is that you will be successful.

Politics has a bit of a dirty name. It’s associated with false promises, backstabbing, alliances and manipulating others. But the worst weakness of politics is its failure to deliver on its promises. Time and time again we see public politicians or business leaders failing to deliver the change they promise. And we as project managers do as well.

Power, in the engineering sense, is defined as the rate at which work gets done. In the social sense, power is the ability to get others to do the work (or take the actions) you want, regardless of their desires.

When we think of all the project managers who have responsibility without authority, who must elicit support by influence and not by command authority, then we can see why power is one of the most important topics in project management.

Power can originate from the individual or from the organization. Power is often supported by other people’s perception of the leader. It is essential for you to be aware of your relationships with other people, as relationships enable you to get things done on the project.

There are numerous forms of power at the disposal of project managers, but using them can be complex given their nature and the various factors at play in a project. Some forms of power are:

> Positional (sometimes called formal, authoritative, legitimate; e.g., formal position granted in the organization or team);

> Informational (e.g., control of gathering or distribution);

> Referent (e.g., respect or admiration others hold for the individual, credibility gained);

> Situational (e.g., gained due to a unique situation such as a specific crisis);

> Personal or charismatic (e.g., charm, attraction);

> Relational (e.g., participates in networking, connections and alliances);

> Expert (e.g., skill, information possessed, experience, training, education, certification);

> Reward-oriented (e.g., ability to give praise, money, or other desired items);

> Punitive or coercive (e.g., ability to invoke discipline or negative consequences);

> Ingratiating (e.g., application of flattery or other common ground to win favor or cooperation);

> Pressure-based (e.g., limiting freedom of choice or movement for the purpose of gaining compliance to desired action);

> Guilt-based (e.g., imposition of obligation or sense of duty);

> Persuasive (e.g., ability to provide arguments that move people to a desired course of action); and

> Avoiding (e.g., refusing to participate).

Effective project managers work to understand the politics inside their organization and are proactive and intentional when it comes to power. These project managers will work to acquire the power and authority they need within the boundaries of the organization’s policies, protocols, and procedures, rather than waiting for it to be granted, or never granted at all.
