A database is a collection of content, structured or unstructured, that resides on cloud infrastructure. Data analytics is the process of examining data sets in order to draw conclusions about the information they contain.
Databases must support millions of transactions while remaining agile, flexible, and scalable. Analytics helps enterprises make more informed business decisions and also helps validate research models, theories, and hypotheses.
Processing large quantities of data as fast as possible (near real-time).
Solves communication complexity for data processing. Helps with real-time data movement, predictive maintenance, fraud detection, IoT, QoS, etc.
Source Control Management.
One codebase tracked in revision control, many deploys.
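The "one codebase, many deploys" principle is usually realized by keeping deploy-specific settings out of the code, for example in environment variables. A minimal Python sketch, where the variable names and defaults are illustrative assumptions:

```python
import os

def load_config(environ=os.environ):
    """Build deploy-specific config from the environment.

    The same codebase runs in dev, staging, and production;
    only these environment values differ per deploy.
    """
    return {
        "database_url": environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": environ.get("LOG_LEVEL", "INFO"),
        "workers": int(environ.get("WEB_WORKERS", "1")),
    }

# A "production deploy" is just a different set of environment values:
prod = load_config({"DATABASE_URL": "postgres://db/prod", "WEB_WORKERS": "8"})
dev = load_config({})
```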
Image Management for the Application.
Run across all infrastructure – container or otherwise
Continuous Integration – a software development practice where work is integrated frequently.
Continuous Delivery – a software development practice where software is built in such a way that it can be released into production at any time.
Increase code coverage.
Deploy code to production faster.
Build faster and more frequently.
Never ship broken code.
Decrease code review time.
Build repeatable processes.
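The "never ship broken code" idea behind CI/CD can be sketched as a pipeline where each stage must pass before the next runs; the stage names and checks below are illustrative, not a real CI system:

```python
# Minimal sketch of a CI/CD pipeline: stages run in order and the
# pipeline stops at the first failure, so broken code never reaches
# the deploy stage.
def run_pipeline(stages):
    """Run (name, check) stages in order; stop at the first failure."""
    completed = []
    for name, check in stages:
        if not check():
            return completed, f"pipeline failed at: {name}"
        completed.append(name)
    return completed, "deployed"

stages = [
    ("lint", lambda: True),
    ("unit-tests", lambda: 1 + 1 == 2),
    ("build-image", lambda: True),
]
done, status = run_pipeline(stages)
```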
PaaS / Container Service
Application Platforms (PaaS/aPaaS) on Infrastructure Platforms (IaaS).
Designed specifically to accelerate developer velocity and reduce operational overhead. Enables developers to self-provision and manage the applications they develop, further compressing the turnaround time from inception to release to feedback to iteration, dovetailing with the growing popularity of agile software development.
Serverless / Event-based
Concept of building and running applications that do not require server management. It’s a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.
Zero server ops, which means no provisioning, updating, or managing of server infrastructure, flexible scalability, and no compute cost when idle.
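The serverless model above can be sketched as a tiny platform that invokes uploaded functions per event and charges per invocation rather than per server; the platform class, handler signature, and function names here are hypothetical:

```python
# Sketch of the serverless deployment model: functions are deployed
# without provisioning servers, invoked per event, and "billed" only
# per invocation -- zero compute cost when idle.
class TinyFunctionPlatform:
    def __init__(self):
        self.functions = {}
        self.invocations = 0

    def deploy(self, name, handler):
        self.functions[name] = handler   # no server to provision or manage

    def invoke(self, name, event):
        self.invocations += 1            # pay per request, not per server
        return self.functions[name](event)

def greet(event):
    return {"status": 200, "body": f"hello {event.get('name', 'world')}"}

platform = TinyFunctionPlatform()
platform.deploy("greet", greet)
resp = platform.invoke("greet", {"name": "cncf"})
```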
Ability to report on overall system health. Monitoring is being on the lookout for failures, which in turn requires us to be able to predict these failures proactively. Observability aims to provide highly granular insights into the behavior of systems along with rich context, perfect for providing visibility into implicit failure modes and on-the-fly generation of the information required for debugging.
With network and underlying hardware failures robustly abstracted away, our sole responsibility is to ensure the application is good enough to piggyback on the latest and greatest networking and scheduling abstractions. We need to identify most failures that arise from the application layer or from the complex interactions between different applications, and can focus on the vagaries of the performance characteristics of the application and business logic.
Logging capabilities span from the application to its environment.
To perform continuous monitoring and have the ability to troubleshoot any anomaly during or after runtime.
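The "rich context" that observability depends on usually starts with structured logs rather than bare strings. A minimal sketch using only the standard library; the event and field names are illustrative:

```python
# Structured (JSON) logging: each log line carries machine-parseable
# context (service, trace id, etc.) so anomalies can be correlated and
# debugged during or after runtime.
import json
import time

def log_event(event, **context):
    record = {"ts": time.time(), "event": event, **context}
    print(json.dumps(record, sort_keys=True))   # one JSON object per line
    return record

rec = log_event("payment.failed",
                service="checkout", trace_id="abc123", retries=2)
```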
Scheduling & Orchestration
As applications scale, the ability to manage each host system and abstract away the complexity of the underlying platform.
Maximize resource utilization, balance the constantly changing demands on systems, and meet the need for fault tolerance.
Coordination & Service Discovery
Tools to help with service discovery and coordination between services.
Being fast and flexible helps with service discovery and health checking, and enables storing dynamic configuration, feature flagging, coordination, leader election, etc.
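Service discovery with health checking can be sketched as a registry where instances send heartbeats and are dropped once a TTL expires; the class, TTL mechanism, and addresses below are illustrative assumptions:

```python
# Sketch of service discovery: instances register via heartbeats, and
# discovery returns only instances whose heartbeat is within the TTL
# (i.e. instances that pass the health check).
class Registry:
    def __init__(self, ttl=5.0):
        self.ttl = ttl
        self.instances = {}   # (service, addr) -> time of last heartbeat

    def heartbeat(self, service, addr, now):
        self.instances[(service, addr)] = now

    def discover(self, service, now):
        return [addr for (svc, addr), seen in self.instances.items()
                if svc == service and now - seen <= self.ttl]

reg = Registry(ttl=5.0)
reg.heartbeat("billing", "10.0.0.1:8080", now=100.0)
reg.heartbeat("billing", "10.0.0.2:8080", now=103.0)
healthy = reg.discover("billing", now=107.0)  # first instance has expired
```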
Service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable.
In a cloud native model, a single application may comprise hundreds of services, each service may have thousands of instances, and each of those instances may be in a constantly changing state as it is dynamically scheduled by an orchestrator. This makes service communication an incredibly complex, pervasive, and fundamental part of runtime behavior. Managing it is vital to ensure end-to-end performance and reliability.
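One behavior a service mesh typically adds transparently to make service-to-service communication reliable is retrying failed calls. A client-side sketch of that idea, where the flaky upstream service and retry policy are illustrative:

```python
# Retry on transient connection failures -- the kind of resilience a
# service mesh sidecar provides without changing application code.
def call_with_retries(call, max_attempts=3):
    last_error = None
    for _ in range(max_attempts):
        try:
            return call()
        except ConnectionError as err:
            last_error = err
    raise last_error

attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:          # fails twice, then succeeds
        raise ConnectionError("upstream unavailable")
    return "ok"

result = call_with_retries(flaky_service)
```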
Storage with the ability to be containerized, dynamically managed, persistent when needed, and microservices-oriented.
All data updates must be atomic and go to a durable, decoupled, shared persistence layer. Data access must be concurrent (asynchronous) so that we can scale application performance linearly and maintain application availability. The data layer must be elastic and durable so that we can support constant data growth without disrupting the service. Data should have a flexible structure and schema to allow continued development of new features and application versions.
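The atomic, concurrent update requirement above can be sketched as optimistic concurrency: each record carries a version, and a write succeeds only if the version is unchanged (compare-and-swap). The store class and keys are illustrative:

```python
# Compare-and-swap over a versioned store: concurrent writers never
# silently overwrite each other -- a stale write is rejected and the
# writer must re-read and retry.
class VersionedStore:
    def __init__(self):
        self.data = {}   # key -> (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))

    def cas(self, key, expected_version, value):
        version, _ = self.data.get(key, (0, None))
        if version != expected_version:
            return False                     # someone else wrote first
        self.data[key] = (version + 1, value)
        return True

store = VersionedStore()
v, _ = store.read("cart:42")
ok_first = store.cas("cart:42", v, ["apples"])
ok_stale = store.cas("cart:42", v, ["oranges"])  # stale version, rejected
```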
Software that executes containers and manages container images on a node.
A container runtime enables users to make effective use of containers by providing APIs and tooling that abstract the low-level technical details. Helps with: container lifecycle management, image management, and container and snapshot metrics.
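Container lifecycle management can be sketched as a small state machine over the container's states; the states and allowed transitions below are a simplified assumption, not any particular runtime's exact API:

```python
# Simplified container lifecycle: a runtime only permits certain state
# transitions (e.g. you cannot pause a stopped container).
ALLOWED = {
    "created": {"running"},
    "running": {"paused", "stopped"},
    "paused":  {"running", "stopped"},
    "stopped": set(),                 # terminal state
}

class Container:
    def __init__(self, image):
        self.image = image
        self.state = "created"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state

c = Container("registry.example/app:1.0")
c.transition("running")
c.transition("paused")
c.transition("stopped")
```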
Cloud Native Network
Network segmentation & policy; SDN & APIs.
An IP for each container.
Port clashes disappear.
Eases discovery.
Distributes routing.
Distributes policy.
Enforces policy and forwards to the right destination.
Works at scale.
Focus on managing resources: storage, compute, machine instances and even containers in an automated way.
Environments really become immutable, more resilient and easier to manage.
Automate Infrastructure Configuration – automate routine tasks as much as possible.
More time for mission-critical tasks.
Eases management of complex & diverse environments.
Allows rapid scale-in/scale-out.
Repository for storing container images. Container Image consists of many files which encapsulates an application.
Repeated installs from the same image.
The same application can be shipped from one host to another.
Efficient.
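The efficiency of repeated installs comes from registries storing image layers content-addressed by digest, so identical content is stored only once. A sketch of that deduplication using the standard library; the store class is illustrative:

```python
# Content-addressable layer store: a layer's identity is the SHA-256
# digest of its content, so pushing the same bytes twice stores one blob.
import hashlib

class LayerStore:
    def __init__(self):
        self.blobs = {}   # digest -> layer bytes

    def push(self, content: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(content).hexdigest()
        self.blobs[digest] = content   # identical content deduplicates
        return digest

store = LayerStore()
d1 = store.push(b"base os layer")
d2 = store.push(b"app layer")
d3 = store.push(b"base os layer")   # same content -> same digest, no new blob
```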
Securing Container Images.
Efficient and secure RBAC for images. Image scanning for vulnerabilities.
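RBAC for images can be sketched as a mapping from roles to allowed actions, checked before any image operation; the role and action names are illustrative assumptions:

```python
# Role-based access control for a registry: a request is allowed only
# if the caller's role grants the requested action on images.
ROLE_PERMISSIONS = {
    "reader":    {"pull"},
    "developer": {"pull", "push"},
    "admin":     {"pull", "push", "delete", "scan"},
}

def is_allowed(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())

can_push = is_allowed("developer", "push")
can_delete = is_allowed("developer", "delete")   # denied: admins only
can_anything = is_allowed("unknown-role", "pull")
```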
Allows management of encryption keys in the cloud.
Provides security for sensitive digital data in the cloud.
Based on the standard cloud computing model, in which a service provider makes resources, such as virtual machines (VMs), applications or storage, available to the general public over the internet. Public cloud services may be free or offered on a pay-per-usage model.
Reduces the need for organizations to invest in and maintain their own on-premises IT resources.
Enables scalability to meet workload and user demands.
Fewer wasted resources, because customers only pay for the resources they use.
Multi-tenant.
Private cloud is a type of cloud computing that delivers similar advantages to public cloud, including scalability and self-service, but through a proprietary architecture.
Single-tenant architecture.
On-premises hardware.
Direct control of the underlying cloud infrastructure.