– This article has been archived and is no longer updated by our editorial team –
Terry Shea is a member of the senior management team at Kublr, the developer of a leading enterprise Kubernetes management platform. Currently, Terry is working with the Kublr team on ensuring that enterprise customers can leverage containers and Kubernetes across multiple environments, on-premises or in different clouds.
Below is our recent interview with Terry:
Q: What does it mean to have cloud native applications?
A: The term can be confusing because cloud native is not limited to the public cloud. The Cloud Native Computing Foundation, the organization responsible for Kubernetes and other projects, defines cloud native applications as having three characteristics: they are containerized, dynamically orchestrated, and microservices-oriented.
Docker popularized containers and created open-source tooling that makes it easy for developers to use them. Kubernetes was open-sourced by Google and is now the only real choice for orchestrating containers at scale. Microservices are becoming increasingly popular, and service meshes like Istio and Linkerd are emerging to handle the operational issues of large-scale microservices deployments.
Q: Why do you think we’re seeing growing interest in cloud native approaches?
A: The growth in cloud native applications is part of a larger trend in the evolution of how applications are developed, iterated on, and managed. From an organizational viewpoint, we see companies moving to a DevOps culture, where Dev and Ops teams are in closer alignment and have more shared responsibilities. This shift is happening across industries and is a response to increasing digital interaction with customers and suppliers. Companies need to measure the effectiveness of their digital interactions with their customers and rapidly improve their applications.
In financial services, many of the Fintech start-ups that are disrupting traditional firms are using cloud native approaches. And they’re a real threat to the established order. For example, a recent U.S. Government report noted that:
• 3,300 fintech firms were created between 2010 and 2017
• Financing of fintech firms reached $22 billion in 2017
• Personal loans by these firms went from 1% to 36% of loans over that period
Q: What do you see as the two biggest challenges that financial services firms face when moving to cloud native architectures?
A: Traditional financial services companies face two primary obstacles when moving to cloud native architectures: regulatory compliance and legacy monolithic back-end applications.
Traditionally, regulators in the U.S. and Europe tell banks and other regulated financial services companies “what to do,” not “how to do it.” This includes directives to manage service providers, including cloud providers, and to have contingency plans in place in case there are problems with a service provider. Application portability should be a key consideration in these contingency plans, and correctly designed cloud native applications can be a key enabler of portability.
The second challenge is that most established financial services firms can’t or won’t get rid of monolithic core applications overnight. Instead, they will need to architect hybrid applications, with cloud-native front-ends running in the cloud, in their data centers, or both, connecting to back-end services that remain in the data center.
Q: What are some considerations that financial services firms should look at before going cloud native?
A: Being able to develop, run, and manage cloud-native applications in multiple environments means financial services firms must consider how they will address some key issues:
• Do you need the massive scalability of the cloud? To be specific, from a Kubernetes standpoint, will horizontal pod autoscaling be sufficient, or will you need node autoscaling? (A minimal autoscaling sketch follows this list.)
• Does this application talk to a monolithic application on your back-end, such as a core banking system? If so, how will you regulate the impact of front-end volume on back-end resources? (See the throttling sketch after this list.)
• The rapid iteration and innovation enabled by containers, Kubernetes, and other cloud-native technologies brings a much higher frequency of application releases. How do your current dev, QA, and release processes align with a faster release schedule? Do you need to change your processes?
• Monitoring cloud-native applications requires a new stack, which may include FluentD, Prometheus, and possibly the ELK stack. How will you scale both cluster and application monitoring and provide the right visibility and alerts to your Dev and Ops teams? (A small application-metrics sketch follows this list.)
• Troubleshooting microservices requires tracing capabilities provided through Jaeger, Zipkin, and other solutions. These are newer tools that many organizations are not yet familiar with.
• Securing this “new stack” includes, at a minimum, implementing container scanning, trusted registries, integration with IAM for admins, and securing communication internal to Kubernetes nodes.
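On the autoscaling question, the sketch below shows one way horizontal pod autoscaling can be configured with the official Kubernetes Python client; the deployment name "frontend", the namespace "demo", and the CPU threshold are illustrative assumptions, not details from the interview. Node autoscaling, by contrast, is handled at the infrastructure level (for example by a cluster autoscaler) rather than by this object.

    # Minimal sketch, assuming a Deployment named "frontend" in namespace "demo".
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running in a pod

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="frontend-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="frontend"
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="demo", body=hpa
    )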
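On regulating the impact of front-end volume on back-end resources, one common pattern is to cap concurrent calls from the cloud-native front-end to the legacy system. This is a hypothetical sketch of that idea; call_core_banking() is an invented placeholder for a real back-end client call.

    # Sketch: bound the pressure that horizontally scaled front-end pods
    # can put on a monolithic core banking back-end.
    import threading
    import time

    MAX_CONCURRENT_BACKEND_CALLS = 20
    backend_gate = threading.BoundedSemaphore(MAX_CONCURRENT_BACKEND_CALLS)

    def call_core_banking(account_id: str) -> dict:
        # Placeholder for a real request to the monolithic back-end.
        time.sleep(0.05)
        return {"account": account_id, "status": "ok"}

    def guarded_backend_call(account_id: str) -> dict:
        # Front-end pods scale out, the back-end does not; the semaphore
        # keeps the total number of in-flight back-end calls bounded.
        with backend_gate:
            return call_core_banking(account_id)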
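On the monitoring question, the sketch below shows how an application might expose metrics for Prometheus to scrape using the prometheus_client Python library; the metric names, the port, and the workload are illustrative assumptions.

    # Sketch: exposing application metrics on /metrics for Prometheus to scrape.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("payments_requests_total", "Total payment requests handled")
    LATENCY = Histogram("payments_request_seconds", "Payment request latency in seconds")

    @LATENCY.time()
    def handle_payment() -> None:
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            handle_payment()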