Kubernetes – Creating Another Legacy Environment?

Kubernetes, the open-source container orchestration system, automates deploying and scaling container-based applications. However, its complexity worries tech execs, who fear it may become an expensive, difficult-to-manage legacy environment with security risks. So, what do tech execs need to know about Kubernetes and its impact on their organizations?

First and foremost, it’s important for tech execs to understand that Kubernetes is not just another buzzword in the tech industry. It is a powerful tool that has gained immense popularity due to its ability to simplify and streamline container management. With containers becoming increasingly popular for application deployment, Kubernetes offers a centralized platform for managing these containers and their associated resources.

One of the key benefits of using Kubernetes is its scalability. It allows businesses to easily scale their applications up or down depending on demand without any disruption or downtime. This can significantly reduce infrastructure costs and improve overall efficiency.
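To make the scaling mechanism concrete: Kubernetes' Horizontal Pod Autoscaler adjusts replica counts using a documented formula, desired = ceil(currentReplicas × currentMetric / targetMetric). A minimal Python sketch of that arithmetic (illustration only; in a real cluster the HPA controller does this for you):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Replica count per the Horizontal Pod Autoscaler's documented formula:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Four pods averaging 180% of target CPU utilization scale out to eight.
print(desired_replicas(4, 180, 100))   # -> 8
# Ten pods running at half the target scale in to five.
print(desired_replicas(10, 50, 100))   # -> 5
```

The same formula drives both scale-out and scale-in, which is why demand spikes can be absorbed without manual capacity planning.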

However, with this increased flexibility comes potential challenges as well. The complexity of managing a large number of containers and resources can be overwhelming, leading to potential security vulnerabilities. This is why it is crucial for businesses to have a solid understanding of Kubernetes and its best practices.

Let’s explore factors that could lead to challenges with Kubernetes and how to avoid them.

  1. Complexity – The complexity of Kubernetes may lead to excessive layers of abstraction. This can make understanding each layer challenging for developers, resulting in fragmented deployment approaches and inconsistency across the organization. To address this, executives should prioritize comprehensive training and onboarding for stakeholders to foster shared understanding and best practices.

  2. Accessibility – Kubernetes empowers developers, but it also brings governance and control challenges. Access management and guidelines are crucial to prevent issues and maintain a well-managed environment.

  3. Compatibility – One of the significant concerns with legacy environments is the cost of updating and migrating applications, and Kubernetes is no exception: upgrades can be complex and expensive. Companies need to ensure that their applications continue to work as they upgrade Kubernetes itself and manage the versions of its components. To prevent surprises, conduct thorough testing before migrating from older versions to newer ones.

  4. Security – Kubernetes offers many security features and integrates with other tools to enhance them. However, improper configuration during deployment can undermine these protections. Configuration errors, like granting excessive privileges to a service account, can open the door to a breach. To prevent this, tech execs should ensure their organizations implement the correct security policies and follow a sound configuration-management process.

  5. Abstraction changes – Kubernetes abstracts much of what happens under the hood, making it easy to deploy container-based applications. However, over-reliance on those abstractions can cost teams granular insight into how a specific application runs on any given node or cluster. To prevent this, tech execs should ensure that monitoring and logging services are in place, so teams can track performance, view dependencies, and address any discrepancies the abstraction would otherwise hide.
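On the security point above (item 4), a common misconfiguration is a Role or ClusterRole that grants wildcard privileges. A minimal, hypothetical Python sketch that scans a parsed RBAC manifest for wildcard verbs or resources (the `role` dict is illustrative; in practice you would export the real object with `kubectl get role -o yaml` and parse the YAML):

```python
def overly_permissive(rules):
    """Return the rules that grant wildcard verbs or wildcard resources."""
    return [r for r in rules
            if "*" in r.get("verbs", []) or "*" in r.get("resources", [])]

# Illustrative manifest, already parsed into a dict.
role = {
    "kind": "Role",
    "metadata": {"name": "app-role"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
        {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},  # red flag
    ],
}

for rule in overly_permissive(role["rules"]):
    print("over-privileged rule:", rule)
```

Checks like this are easy to run in CI before manifests reach a cluster, which is one practical form of the configuration-management discipline described above.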

In conclusion, Kubernetes offers organizations a real opportunity: automation, faster deployment, and improved scalability. However, be cautious of legacy-style complexity, security issues, and unmanageable environments. Establish guidelines, enable the right personnel, and implement proper governance for safe adoption and full advantage of Kubernetes.

Click here for a post on managing cost with Kubernetes and FinOps.

Keep the Data Center or Move to the Cloud?

Data centers have long been crucial for storing data and running applications. But as cloud computing gains popularity, businesses must decide whether to stick with data centers or migrate to the cloud. This choice is especially vital for tech execs balancing cost, security, and scalability. So, what are the key factors to consider when deciding between data centers and the cloud?

Firstly, let’s define these two options. Data centers are physical facilities that hold servers and networking equipment for storing and processing data. They can be owned by a company or leased from a third party. On the other hand, the cloud refers to remote servers accessed over the internet for storing and managing data, running applications, and delivering services.

So, let’s explore data centers vs. cloud computing pros and cons to guide your company’s choice.

  1. Cost – When it comes to cost, data centers and cloud computing can vary widely. Data centers require a significant upfront investment in hardware, software, and maintenance, while cloud providers offer a pay-as-you-go model that can be more cost-effective for smaller businesses. However, as your company grows and your cloud usage increases, you may find that the costs of cloud computing can quickly escalate. Additionally, many cloud providers charge additional fees for add-on services, storage, and data transfer, which can make it difficult to predict your long-term costs. Before making a decision, do a cost analysis of both options, and factor in your company’s growth plans.

  2. Security – Security is a major concern for any company that stores sensitive data. Data center security can be more easily controlled with in-house staff and equipment, while cloud providers have a team of dedicated security professionals monitoring their infrastructure. However, cloud providers are also a more attractive target for cybercriminals and can be vulnerable to data breaches. When choosing a cloud provider, be sure to research their security measures, certifications and compliance standards. It’s also important to note that cloud providers may not be able to guarantee the same level of security as an in-house data center.

  3. Scalability – One of the key benefits of cloud computing is its scalability. It allows companies to easily scale up or down their infrastructure as their needs change. This flexibility can be particularly beneficial for small businesses that are rapidly growing or seasonal. Data centers, on the other hand, are more limited in their scalability, and require significant upfront planning and investment to allow for growth. That being said, if your company is experiencing steady growth or has a fixed workload, a data center may be a more cost-effective solution.

  4. Reliability – Data centers have a reputation for being reliable and consistent. Companies have complete control over the hardware and software, which allows them to maintain uptime and stability. Cloud computing, on the other hand, is dependent on the provider’s infrastructure and internet connectivity. This can lead to downtime, service interruptions, and fluctuations in performance. However, many cloud providers have invested heavily in improving their reliability with advanced technology like load balancing and redundant servers.

  5. Maintenance and Support – Data centers require regular maintenance and upkeep, which can be costly and time-consuming for companies. Cloud providers handle the maintenance, upgrades, and support for their infrastructure, which can save companies time and money. However, it’s important to choose a provider with a reliable support team and solid track record of timely issue resolution.
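The cost trade-off in point 1 can be made concrete with a back-of-the-envelope model. All figures below are hypothetical placeholders, not benchmarks; the point is the shape of the comparison — a large upfront data-center spend with a flat run rate versus a smaller but growing cloud bill:

```python
def cumulative_cost(upfront: float, monthly: float,
                    growth: float, months: int) -> float:
    """Total spend over `months`, with the monthly bill compounding by `growth`."""
    total = upfront
    bill = monthly
    for _ in range(months):
        total += bill
        bill *= 1 + growth
    return total

# Hypothetical: owned data center vs. pay-as-you-go cloud over three years.
dc = cumulative_cost(upfront=500_000, monthly=20_000, growth=0.0, months=36)
cloud = cumulative_cost(upfront=0, monthly=15_000, growth=0.03, months=36)
print(f"data center 3-yr: ${dc:,.0f}   cloud 3-yr: ${cloud:,.0f}")
```

Changing the growth rate or the horizon flips the answer, which is exactly why the analysis should factor in the company's growth plans rather than a single point-in-time quote.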

Deciding between keeping your data center or moving to the cloud boils down to your company’s needs.

Data centers offer reliability, control, and security, but can be costly and inflexible. Cloud computing provides scalability, cost savings, and easy maintenance, but carries security risks and extra fees. Consider the pros and cons, align with your goals, budget, and growth plans, and consult with a technology expert if needed.

Click here for a post on the environmental impact of moving to cloud vendors.

TCO and the Hybrid Cloud Environment

Many tech execs manage a hybrid cloud environment with multiple cloud providers and, often, an existing mainframe. Some companies ended up there because they were early cloud adopters, didn't get the outcomes they wanted, and tried another provider. In other cases, different business units chose different cloud providers without proper decision controls. Still others deliberately selected multiple providers to avoid relying on a single one.

Regardless of how a company got here, the tech executive strives to optimize performance in this intricate environment. Total cost of ownership can balloon when multiple cloud implementations coexist with a legacy environment, such as a mainframe.

Tech execs are worried as overall tech infrastructure costs rise due to cloud migration.

The message has always been that moving to the cloud reduces costs because the provider owns the equipment, rather than the company maintaining hardware in its own data center. When costs rise instead, that pitch to leadership can look inaccurate in hindsight.

The reality is, moving applications from legacy systems to the cloud can lead to higher costs.

While transition may require some overlap in production, it’s crucial to decommission as much as possible during migration. A detailed plan should demonstrate the cost reduction during the move. Clearing up tech debt in the mainframe environment beforehand is wise to avoid carrying debt to the cloud, which adds to expenses.

Why are organizations stuck with a hybrid environment?

Initially, amid the cloud hype, many jumped on board hoping for immediate savings. But merely moving a messy app to a new platform just shifts the problems to a different environment. Rehosting on its own doesn't solve anything; it's a data-center change that leaves the cloud provider's benefits untapped.

Many organizations switched to a different cloud provider because they misunderstood how to derive value from their first choice. Rehosting merely moved the chaos from one place to another, and failing to leverage the provider's PaaS offerings drove up costs on the new platform as well.

A tech exec needs a thorough plan to migrate the legacy environment to the cloud. If going hybrid, understand the total cost of ownership and consider consolidating platforms for cost-effectiveness. Manage legacy decommissioning alongside migration. Simplify and optimize platform management. Use TCO to assess value in a broad environment.
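To make "use TCO to assess value" actionable, here is a deliberately simple roll-up sketch. The cost categories and dollar figures are invented for illustration; a real analysis would also cover migration, licensing, egress, and staffing detail per platform:

```python
def tco(costs: dict, years: int) -> float:
    """One-time costs plus recurring annual costs over the analysis horizon."""
    one_time = costs.get("one_time", 0)
    annual = sum(v for k, v in costs.items() if k != "one_time")
    return one_time + years * annual

# Hypothetical annual figures for two platforms in a hybrid estate.
mainframe = {"licenses": 800_000, "staff": 600_000, "maintenance": 250_000}
cloud_a = {"one_time": 400_000, "compute": 500_000, "staff": 450_000}

for name, costs in (("mainframe", mainframe), ("cloud A", cloud_a)):
    print(f"{name}: 5-yr TCO ${tco(costs, 5):,.0f}")
```

Even a roll-up this crude makes platform consolidation conversations concrete: each platform kept in the hybrid mix adds its own recurring staff and licensing lines to the total.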

See this post on Total Cost of Ownership and how to calculate.

Legacy Mainframe Environment

Today, tech execs are concerned about aging mainframe application code. Many companies have run mainframes since the 70s and 80s. Large companies, particularly in insurance and finance, built applications during that era that still run on mainframes today. These applications consist mostly of COBOL, often millions of lines of it.

SIDE NOTE: COBOL, developed in 1959, is one of the oldest programming languages still in production use; among languages still in service, only Fortran (1957) and Lisp (1958) are older.

Today’s mainframe computers have powerful processors and seamlessly run COBOL applications alongside Docker containers. Tech executives face challenges with complex COBOL, PL/1, and Assembler code, as well as managing decades of data in diverse environments like DB2, MySQL, and Oracle. We’ll discuss data in a future post.

Mainframe applications have long been vital for enterprise business processing. They were game-changers, and still handle key workloads effectively. However, the drive to convert or move these applications has been slow. Today, tech execs face fierce competition in aggressive markets. Outdated systems hinder companies from keeping up with innovative rivals. Cloud computing enables competitors to invest in new systems without hardware burdens. Consequently, older companies face disadvantages and must modernize their legacy application environment. The three reasons for this transformation are:

  1. Agility: Companies need IT systems that can be updated to meet functional processing requirements more quickly. Shorter development cycles are a must for organizations to keep pace.

  2. Cost: The mainframe is the costliest computer available. In many organizations, it’s also difficult and time consuming to maintain. The complexity of the code and data environments makes keeping the systems up and running difficult. Modern cloud technologies offer a significant reduction in cost of ownership.

  3. Risk: Knowledge of legacy environments is fading as the programmers who wrote this code years ago retire. Skills in COBOL, PL/1, CICS, and the like are becoming scarce, making it harder to maintain the applications and respond to major incidents.

To remain competitive, organizations must tackle legacy mainframe systems. The transformation should uncover the current state and map out an ideal future state. Develop a value proposition with a total cost of ownership analysis for transitioning to the cloud. When it comes to maintaining the mainframe and harnessing the power of the cloud, it's worth considering strategies from industry leaders like IBM. Take into account the costs of migration and retooling, but weigh them against the long-term ownership benefits. Furthermore, take the time to explore the numerous advantages that cloud computing has to offer.

Click here for a post on deciding whether to move from AS400’s or not.

Don’t Jump Too Quick

As the chief tech exec of your company, you are faced with a nervous CEO/CFO/COO who wants to cut costs amidst economic uncertainties. Understanding the potential benefits of cloud technology in reducing total cost of ownership, they have tasked you with exploring the expedited migration of certain application assets to the cloud. How will you proceed?

We talked in past posts about technology strategy. If you have one in place, this can help you convince the leadership that jumping too fast to a new platform can be detrimental to the organization.

You have not retooled your processes or retrained your staff, and you certainly don't know the business impact of moving too quickly. What about contact center support?

In our technology strategy we have the following step:

7. Envision Target State and Assess Gaps: The IT mandate and analysis of the current environment should allow for definition of strategic goals and a conceptualization of the future state. To achieve this goal, follow an Enterprise Architecture modeling approach that allows for depiction of a potential future state.

By clearly defining what the target environment should look like – to support the business goals – you will have an idea for the training needs of the organization. You should also begin to understand the costs of such a move, and the impacts on how the business operates today.

Manage expectations

Operating without a well-thought-out strategy exposes technology leaders to inherent vulnerabilities.

You'll struggle to maintain the target platform without the right skills, and you'll also need to support the current platform during the overlap period… assuming you can even move everything. Don't underestimate the cost of training your people and supporting both technology environments.

If you are rehosting (just moving) applications from on-site to the cloud, you'll find they may not perform as well and need retrofitting, which is costly. You may also carry over technical debt, such as back-level software versions and system components that could have been eliminated before the move.

In general, a poorly planned transition to the cloud (or any platform) can increase expenses for the company and even lead to job losses. Ultimately, it is crucial to leverage your strategy when considering technology shifts, to ensure comprehensive planning and mitigate risks effectively.
