How does the AWS Cloud increase the speed and agility of execution for customers? (Select two)

Updated: 2022-09-18.


By Phil de Valence, Principal Solutions Architect for Mainframe Modernization AWS

Contents

  • Quality of Service Prerequisite
  • The 12 Agility Attributes Needed for Mainframe Workloads
  • 1. Agile Development with CI/CD
  • 2. Integrated Development
  • 3. Knowledge-Based Development
  • 4. Service-Enabled and Modular Applications
  • 5. Elasticity with Horizontal Scalability
  • 6. On-Demand, Immutable, Disposable Servers
  • 7. Choice of Compute, Data Store, and Language
  • 8. Broad Data Store Access
  • 9. Pervasive Automation
  • 10. Managed and Fully-Managed Services
  • 11. Consumption-Based Pricing
  • 12. Innovation Platform
  • Short-Term Architecture with the 12 Agility Attributes
  • Accelerated Migration of Mainframe Workloads Toward Agility
  • Short-Term Migration vs. Rip and Replace
  • Go Build for Agility
  • Which of the following are benefits of the AWS Cloud? (Select two)
  • Which of the following are ways to improve security on AWS? (Choose two)
  • What is AWS Cloud agility?
  • Which AWS service or feature allows users to connect with and deploy AWS services programmatically?

Amazon Web Services (AWS) CEO Andy Jassy says, “The
main reasons that organizations are moving to the cloud are speed and agility.”

Mainframes typically host core business processes and data. To stay competitive, customers have to quickly transform their mainframe workloads for agility while preserving resiliency and reducing costs.

There is a challenge in defining the agility attributes and prioritizing the corresponding transformations for maximum business value in the least amount of time.

In this post, I will describe
practical agility attributes needed by mainframe workloads, and how to accelerate the transformation towards such agility with Amazon Web Services (AWS).

Quality of Service Prerequisite

For mainframe workloads with critical business applications, high quality of service with high security, high availability, high scalability, and strong system management is fundamental. This is a prerequisite for any platform executing business-critical workloads.

The AWS Cloud meets or
exceeds demanding mainframe workload non-functional requirements by leveraging AWS Well-Architected best practices combined with fit-for-purpose AWS compute, data, storage, and network services.

Once these are in place, agility further improves security, availability, and scalability with up-to-date security protections, rapid incident responses, and reactive resource allocation. Quality of service is not
the topic of this post, but if you have questions about it, feel free to contact us or reach out to your existing AWS contacts.

The 12 Agility Attributes Needed for Mainframe Workloads

Agility is the ability of a business to quickly and inexpensively respond to change. In the context of IT in general and mainframes in particular, change refers to modifying applications and infrastructures to alter functionality,
configurations, or resources. Other domains, including market, culture, organization, and processes, also affect business agility, but this post focuses on the mainframe platform.

The most important aspect of agility is speed. We can measure speed by metrics such as time-to-market, provisioning time, adoption time, experimentation pace, capacity sizing lag, implementation speed, delivery cycles, and more.

However, agility also requires low cost of change because the higher the cost,
the more blockers, hurdles, and financial constraints appear.

To become agile, mainframe workloads need to adopt 12 attributes:

  • Agile development with CI/CD
  • Integrated development environment
  • Knowledge-based development
  • Service-enabled and modular applications
  • Elasticity with horizontal scalability
  • On-demand, immutable, disposable servers
  • Choice of compute, data store, and language
  • Broad data store access
  • Pervasive automation
  • Managed and fully-managed services
  • Consumption-based pricing
  • Innovation platform

Each attribute facilitates change at higher speed and lower cost. In the following, for each attribute, I’ll describe the business value, technical aspects, and how it differs from legacy mainframe workloads.

    1. Agile Development with CI/CD

    It accelerates the software development process and velocity for quicker time-to-market. It leverages DevOps best practices.
    It can rely on continuous integration and continuous delivery (CI/CD) pipelines, which automate the code build, test, deployment, and release process. That increases speed. It also increases quality via automated security checks, coding standards verification, static code, and complexity analysis.

    Agile development with CI/CD differs from waterfall development cycles used in legacy mainframe development to deliver releases only once or twice a year.
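The build, test, and deploy gates described above can be sketched as a chain of automated stages. This is a minimal, tool-agnostic sketch; the stage names and the check markers are invented for illustration, not any specific CI/CD product's API:

```python
# Minimal sketch of a CI/CD pipeline as a chain of automated stages.
# Each stage returns True on success; the pipeline stops at the first
# failure, so broken code never reaches the release stage.

def build(source):
    # Stand-in for a compile step (e.g., a COBOL or Java build).
    return "syntax-error" not in source

def static_checks(source):
    # Stand-in for automated security checks and coding-standards verification.
    banned = ["HARDCODED_PASSWORD", "GOTO_SPAGHETTI"]
    return not any(marker in source for marker in banned)

def unit_tests(source):
    # Stand-in for an automated test suite.
    return True

def run_pipeline(source, stages):
    """Run stages in order; return the name of the first failing stage, or None."""
    for stage in stages:
        if not stage(source):
            return stage.__name__
    return None  # all gates passed -> ready to deploy and release

stages = [build, static_checks, unit_tests]
print(run_pipeline("MOVE A TO B.", stages))              # None: passes every gate
print(run_pipeline("HARDCODED_PASSWORD = 'x'", stages))  # 'static_checks'
```

Because every gate is code, each commit gets the same checks automatically, which is where both the speed and the quality gains come from.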

    2. Integrated Development

    It increases code development efficiency and facilitates the onboarding of new developers and skilled talents. Unified, modern integrated development environments (IDE) rely on popular Eclipse or Visual Studio IDEs because they have productivity features such as smart editing, debugging, instant compilation, and code refactoring.

This contrasts with legacy mainframe development, which can still use outdated terminal emulation, text interfaces, function keys, and column-based formatting requirements, along with peculiar dataset, job, and spool concepts.

    3. Knowledge-Based Development

    It increases understanding and quality when changing the code in complex multi-million line-of-code application portfolios. It allows developers to ramp up quickly and become confident when making code changes. It can also restore lost knowledge. It relies on analyzer capabilities for impact analysis, program and data dependency discovery, code change, and refactoring planning.

On the mainframe side, it can take years before a developer can start feeling comfortable with large amounts of code. It’s also common to find situations where the expertise and knowledge have been lost, so new developers are afraid to change the code for fear of breaking it.
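The impact analysis these analyzer tools perform can be illustrated with a reverse-dependency walk: given which program changed, find every program that directly or transitively calls it. The program names and the call graph below are invented for illustration:

```python
from collections import deque

# Hypothetical call graph: each program maps to the programs it calls.
calls = {
    "BILLING": ["CALCTAX", "FMTRPT"],
    "PAYROLL": ["CALCTAX"],
    "CALCTAX": ["ROUND2"],
    "FMTRPT":  [],
    "ROUND2":  [],
}

def impacted_by(changed, calls):
    """Return all programs that directly or transitively depend on `changed`."""
    # Invert the graph: callee -> list of callers.
    callers = {}
    for caller, callees in calls.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)
    # Breadth-first walk upward through the callers.
    seen, queue = set(), deque([changed])
    while queue:
        prog = queue.popleft()
        for caller in callers.get(prog, []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# Changing ROUND2 impacts CALCTAX and everything that calls CALCTAX.
print(sorted(impacted_by("ROUND2", calls)))  # ['BILLING', 'CALCTAX', 'PAYROLL']
```

Real analyzers also track data dependencies (copybooks, files, tables), but the principle is the same: compute the blast radius of a change before making it.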

    4. Service-Enabled and Modular Applications

    They provide broad access to reusable services combined with deployment flexibility for using a fit-for-purpose infrastructure. They result in modular macroservices or
microservices aligned with business domains or business functions. Each modular service can be deployed independently onto infrastructure services, with different quality of service levels that include security, availability, elasticity, and systems management.

    The decoupling and service enablement of these applications typically relies on APIs such as RESTful interfaces. It can also involve decoupling large or shared application dependencies. It differs from most decades-old mainframe
    programs that are tightly-coupled and intertwined.
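Service enablement in this sense means putting a thin request/response layer in front of an existing business function so it becomes independently callable over an API. A minimal sketch, with an invented `calculate_premium` function standing in for a legacy program's logic:

```python
import json

# Legacy-style business function (stand-in for, e.g., a COBOL program's logic).
def calculate_premium(age, coverage):
    rate = 0.02 if age < 65 else 0.05
    return round(coverage * rate, 2)

# Thin RESTful-style layer: route a (method, path) pair to the business
# function and translate between JSON and the function's native arguments.
ROUTES = {("POST", "/premium"): calculate_premium}

def handle(method, path, body_json):
    func = ROUTES.get((method, path))
    if func is None:
        return 404, json.dumps({"error": "not found"})
    args = json.loads(body_json)
    return 200, json.dumps({"premium": func(**args)})

status, body = handle("POST", "/premium", '{"age": 40, "coverage": 100000}')
print(status, body)  # 200 {"premium": 2000.0}
```

The business logic is untouched; only the interface changes, which is what lets other channels and applications reuse the function without knowing anything about its original runtime.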

    5. Elasticity with Horizontal Scalability

    It allows scaling out numerous resources to process massive amounts of transactions, and to instantly grow and shrink capacity as the business needs change. It aligns the resource consumption and bill with the business needs and load.

    It leaves behind expensive unused capacity, peak capacity sizing, and over-provisioning resources up front, so it can handle peak levels of business
    activity in the future. It also removes the limitations and bottlenecks from finite physical machines that rely on vertical scaling.

    Elasticity with horizontal scalability contributes to higher availability with many more execution nodes and a smaller blast radius across multiple data centers. Such elasticity requires stateless and share-nothing execution nodes, which can be achieved following The Twelve-Factor App best practices. That
    means any shared data, any temporary queue, and any shared resource lock must be externalized from applications to data stores.

    On the mainframe side, scalability is mostly vertical, within a small number of physical machines with bounded capacity.
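The externalization requirement above can be seen in a small sketch: any node can serve any request because session state lives in an external store, not in the process. A plain dict stands in here for a store such as Redis-based ElastiCache:

```python
# Stateless, share-nothing nodes: session state is externalized to a shared
# data store, so any node can handle any request and nodes are disposable.
# A plain dict stands in for an external store such as Redis/ElastiCache.
external_store = {}

class AppNode:
    """A worker node that keeps NO session state of its own."""
    def __init__(self, name, store):
        self.name, self.store = name, store

    def handle(self, session_id, amount):
        total = self.store.get(session_id, 0) + amount  # read shared state
        self.store[session_id] = total                  # write it back
        return total

node_a = AppNode("a", external_store)
node_b = AppNode("b", external_store)

# Requests for the same session can land on different nodes (e.g., behind a
# load balancer) and still see a consistent running total.
node_a.handle("sess-1", 100)
print(node_b.handle("sess-1", 50))  # 150
```

Because the nodes hold nothing, adding a `node_c` or terminating `node_a` mid-stream changes capacity without losing any session, which is exactly what elastic scale-out requires.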

    6. On-Demand, Immutable, Disposable Servers

    These types of servers, their software, and their resources can be deployed globally by self-service in minutes, getting from idea to implementation several orders of magnitude faster
    than before. Resources are deployed reliably, and in virtually any quantity all over the world. That means treating servers like cattle, not pets.

    Because their resources are disposable, their design mandates no single point of failure, with ample redundancy and strong failover capabilities. An immutable resource or server is never modified, but merely replaced entirely from a trusted source or version. This lowers risks and complexity, and avoids configuration drift.

    They contrast
    with mainframe environments, where it can take months to order, receive, install, and configure a new machine, new component, or new software.

    7. Choice of Compute, Data Store, and Language

    This choice allows flexibility for deploying a workload to the infrastructure that has the required quality of service. In other words, the right tool for the right job. Some applications require massive scalability, while others need very low latency, or have specific technical requirements.

    Choice allows a fit-for-purpose compute and data store selection. The application stack should minimize platform-specific dependencies to accommodate various compute deployment options. These can include elastic compute, containers, or functions, and server-based or serverless options or specific processor architectures.

    Similarly, the data model and data access should accommodate the best-suited data store, whether relational, in-memory, no-SQL, graph, or document databases. For the
programming language, the tools and architecture should support and favor the co-existence of various languages in a polyglot architecture. Each development team ought to choose its own programming language based on culture, history, frameworks, and language advantages.

    This type of choice leaves behind the vendor lock-in and limited legacy options of mainframes.

    8. Broad Data Store Access

    It provides data access and data integration for a wide variety of use cases,
    including data analytics, predictive analytics, business intelligence, new mobile or voice channels, and innovations. It requires a modern data store interface such as an HTTP REST endpoint, or a database driver and APIs for popular programming languages.

    Broad data store access also requires that data be stored in popular character encoding and data structure for easier integrations. It can involve some degree of data normalization, depending on the data store type.

    It avoids the
    challenges we face with archaic indexed data files and EBCDIC encoding in mainframes.
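Python's standard codecs include EBCDIC code pages, so the encoding conversion mentioned above can be shown directly. cp037 is one common EBCDIC variant; the 14-byte record layout (8-character name plus 6-digit amount) is invented for illustration:

```python
# Converting an EBCDIC fixed-width record to ASCII/Unicode for broad access.
# cp037 is a common EBCDIC code page supported by Python's standard codecs;
# the record layout (8-char name + 6-digit amount) is invented.

record = "JSMITH  004250".encode("cp037")  # simulate a mainframe record

def decode_record(raw):
    text = raw.decode("cp037")      # EBCDIC -> str
    return {
        "name": text[:8].rstrip(),  # fixed-width field, trailing blanks
        "amount": int(text[8:14]),  # numeric field stored as plain digits
    }

print(decode_record(record))  # {'name': 'JSMITH', 'amount': 4250}
```

Real migrations also have to handle packed-decimal (COMP-3) and binary fields, which need byte-level unpacking rather than a codec, but the goal is the same: land the data in popular encodings and structures that any modern driver or API can read.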

    9. Pervasive Automation

    It increases the speed and quality for the many facets of application development, resource provisioning, configuration management, monitoring, compliance, and governance. It relies on APIs and Command Line Interfaces (CLI) for managing resources, as well as languages or templates for managing
    Infrastructure-as-Code (IaC).

    IaC represents an infrastructure in code so it can be designed and tested faster and cheaper. As with application code development, IaC allows modularizing, versioning, change tracking, and team-sharing. It helps you benefit from automation in all aspects of an infrastructure, including numerous servers, numerous resources, network, storage, compute, databases, and monitoring,
    across all regions globally.

    Pervasive automation differs from mainframes where most compute, network, storage, and software are configured manually or with manually customized jobs.
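Infrastructure-as-code means the infrastructure definition is ordinary, versionable text that programs can generate, review, and test. A minimal sketch that emits a CloudFormation-style JSON template; the resource name, instance type, and the validation check are illustrative, not a real deployment:

```python
import json

# Infrastructure-as-code sketch: the infrastructure is described as data that
# can be generated, reviewed, versioned, and tested like application code.

def make_template(instance_type):
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"InstanceType": instance_type},
            }
        },
    }

def validate(template):
    """A tiny example of a check a pipeline could run before any deployment."""
    return all("Type" in res for res in template["Resources"].values())

template = make_template("t3.small")
assert validate(template)
print(json.dumps(template, indent=2))
```

Because the template is data, the same change-tracking, review, and automated-gating practices used for application code apply to servers, networks, and databases too.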

    10. Managed and Fully-Managed Services

    These infrastructure services are easier to use, accelerate adoption, minimize operational complexity, and allow customers to focus on application business value instead of infrastructure. They also reduce the cost of ownership.

    For example, a managed
    database service minimizes and automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. For its part, a fully-managed code build service is highly available and scalable with no server to manage, and no software to install, patch, or update.

    It contrasts with a mainframe environment where software and resources need to be explicitly and manually configured with complex topologies.

    11. Consumption-Based Pricing

It allows paying only for what you use on a pay-as-you-go basis. Consequently, it aligns the infrastructure cost with business activity, avoiding wasted money when resources are not used. More importantly, it provides a low cost of entry for experiments, fostering innovation and new business. It provides pricing with no commitment, no long-term contract, and no termination fee.

    Consumption-based pricing differs from the mainframe high cost of entry, high cost for software licenses, and slow
    procurement cycles.
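The cost alignment can be made concrete with simple arithmetic. With invented numbers, compare paying for peak-sized capacity around the clock against paying only for the capacity each hour actually uses:

```python
# Invented hourly load profile: servers actually needed in each of 24 hours
# (quiet night, 4-hour business peak, moderate afternoon, quiet evening).
hourly_need = [2] * 8 + [10] * 4 + [4] * 8 + [2] * 4
price_per_server_hour = 0.10  # illustrative price

# Up-front peak sizing: pay for the peak (10 servers) all 24 hours.
provisioned_cost = max(hourly_need) * 24 * price_per_server_hour

# Consumption-based: pay only for what each hour actually used.
on_demand_cost = sum(hourly_need) * price_per_server_hour

print(f"peak-provisioned: ${provisioned_cost:.2f}")  # $24.00
print(f"pay-as-you-go:    ${on_demand_cost:.2f}")    # $9.60
```

The spikier the load, the bigger the gap, which is why elasticity and consumption-based pricing reinforce each other.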

    12. Innovation Platform

It provides the freedom to extend the core workload by building nearly anything you can imagine based on a wide variety of technology building blocks. It allows quick experiments at low cost, along with the ability to fail fast and learn fast.

The ecosystem enabled by such a platform puts an extensive breadth of services and software at the builder’s fingertips. It should have the most services and features including compute,
    storage, databases, networking, data lakes and analytics, machine learning, artificial intelligence, Internet of Things (IoT), security, and much more.

    In addition, it should provide a marketplace to deploy even more solutions from third-party vendors in minutes, with seamless procurement. The platform should also demonstrate a high pace of innovation, frequently adding new services and features.

    It contrasts with a mainframe environment where installing a new solution requires long
    procurement, where experimentation is minimal, and where innovation is limited.

    Short-Term Architecture with the 12 Agility Attributes

Once we have identified these attributes, we want to know how we can adopt them quickly, and how we can maximize our business agility in the least amount of time.

    Mainframe workloads can be transformed to introduce these agility attributes in the short-term, meaning in less than one or two years for an average mainframe workload. The agility
    attributes are provided by a combination of transformation tools and AWS Cloud platform features.

Let’s take a look at the short-term architecture for a mainframe workload migrated to AWS.

    Figure 1 – Short-term architecture of a mainframe workload migrated to AWS.

    The migrated workload is deployed as macroservices on elastic compute and on managed relational databases. Both the macroservices business logic and data are exposed to other applications, channels, or analytics for innovations.

The application development is supported by a complete DevOps CI/CD pipeline that deploys macroservices to AWS production. The CI/CD components also support and accelerate the development lifecycle for applications still remaining on the mainframe.

To provide the 12 agility attributes, this architecture leverages numerous AWS Cloud and tool features.

Each agility attribute below is paired with the key capabilities of the AWS Cloud or tools that provide it.
    1. Agile development with CI/CD

    AWS provides fully-managed services for a CI/CD pipeline including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy.

Such a pipeline can support many languages, including COBOL, PL/1, Java, and C#.

    Read this post that explains how to Enable Agile Mainframe Development, Test, and CI/CD with AWS and Micro Focus for an example of a COBOL pipeline for a mainframe workload. This type of pipeline can result in code deployed in production on the mainframe or on AWS.

    2. Integrated Development Environment (IDE)
A modern IDE can be deployed within on-demand Amazon Elastic Compute Cloud (Amazon EC2) instances or within Amazon WorkSpaces. Such an IDE can support COBOL, PL/1, Java, C#, and other languages, and is often based on the popular Eclipse or Visual Studio IDEs.
    3. Knowledge-based development
This capability is typically provided by code analyzer tools. They support COBOL, PL/1, Natural, Java, C#, and other languages. They can be collocated or available with the IDE for enhanced developer productivity.
    4. Service-enabled and modular applications

    Service enablement is performed during the migration to AWS through the migration tool.

    For example, a COBOL program or Java class for a business function is exposed as a service via RESTful API. Depending on the chosen granularity and dependencies, it can be a macroservice or a microservice.

    Independent groups of programs become modular and deployable onto fit-for-purpose compute resources, communicating via the service interfaces. If need be, the tool facilitates
    service extraction in order to create even more granular services towards microservices.

    For an example, read How to Peel Mainframe Monoliths for AWS Microservices with Blu Age.

    5. Elasticity with horizontal scalability

    The stateless and share-nothing application stack is created by the migration tool or provided by the middleware layer. It can follow The Twelve-Factor App best practices.

    Typically, shared data is externalized in a relational database such as Amazon Relational Database Service (Amazon RDS) or Amazon
    Aurora, or in an in-memory data store such as Redis-based Amazon ElastiCache.

    On the application side, elasticity is facilitated by AWS Auto Scaling across availability zones and data centers. On the database side, horizontal scalability is facilitated by replicas or
    Amazon Aurora Multi-Master.

    You can find elastic solution examples in this post about Empowering Enterprise Mainframe Workloads on AWS with Micro Focus, and this one about
    High-Performance Mainframe Workloads on AWS with Cloud-Native Heirloom.

    6. On-demand, immutable, and disposable servers

    Macroservices or microservices are deployed on-demand and globally on Amazon EC2 instances, or Amazon Elastic Kubernetes Service (Amazon EKS) containers or AWS Lambda functions.

    With a stateless and share-nothing application stack, the application servers are disposable. With an Auto Scaling Group and a Launch Template, the instances and nodes are immutable.
    With serverless services such as AWS Fargate, you no longer need to provision and manage servers for Amazon EKS containers.

    7. Choice of compute, data store, and language

    AWS provides a wide selection of compute resources to meet the variety of mainframe workloads requirements. We have EC2 instance types of all sizes, with one instance possessing up to 224 CPU cores, 24 TB of memory, 100 Gigabit networking (High Memory instances), or 4.0 GHz vCPU clock speed (z1d instances).

    AWS compute resources also accommodate containers with
    Amazon Elastic Container Service (Amazon ECS) and Amazon EKS, and serverless functions with AWS Lambda functions.

    For the data stores, many relational databases are available via Amazon RDS, including the popular Amazon Aurora. For other data store types, you’ll also find the broadest selection of purpose-built databases on AWS.

    This AWS choice requires
the execution environment or the tool to support these services via a platform-independent, platform-agnostic, or compatible technical stack.

    Regarding languages, tools and IDEs accommodate the customer language of choice, whether it is COBOL, Java or C#, within a polyglot architecture. You can read about how to leverage such choice for mainframe workloads in this post about
    Automated Refactoring from Mainframe to Serverless Functions and Containers with Blu Age.

    8. Broad data store access

    The Amazon RDS or Amazon Aurora databases can be broadly accessed over a TCP/IP network using endpoint hostnames and common APIs. Other AWS data stores also have TCP/IP or HTTP-based communication protocols for broad access.

    Tools or the middleware layer make the data available in ASCII-based encodings, normalized for reuse.

    9. Pervasive automation

    Infrastructure automation is first enabled by AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (AWS CDK), and AWS CloudFormation.

    Automation is complemented by management services with
    automation such as AWS Systems Manager or AWS OpsWorks. In general, AWS encourages and facilitates infrastructure-as-code to enable automation in most infrastructure management activities.

    Application automation can use all the AWS code services mentioned
    previously for CI/CD.

    10. Managed and fully-managed services

    AWS offers a large selection of managed and fully-managed services with varying levels of automation to balance efficiency and simplification with control.

    We can name some of those we found in the short-term architecture for a mainframe workload: AWS Elastic Beanstalk for elastic compute, Amazon EKS for Kubernetes containers,
    Amazon RDS for relational databases, Elastic Load Balancing, the AWS code services, and the system management services such as Amazon CloudWatch, AWS Backup, or AWS CloudTrail.

    11. Consumption-based pricing

    AWS offers a pay-as-you-go approach for pricing for over 175 cloud services. With AWS, you pay only for the individual services you need, for as long as you use them, and without requiring long-term contracts or complex licensing.

    You only pay for the services you consume, and once you stop using them, there are no additional costs or termination fees. Furthermore, AWS encourages cost optimization using the many
    AWS cost management services along with cost optimization best practices.

    12. Innovation platform

    With more than 175 services, AWS has more services, and more features within those services, than any other cloud provider.

All these services are readily available at your fingertips to test new ideas quickly. On top of this, AWS Marketplace offers thousands of ready-to-deploy software listings from independent software vendors (ISVs).

This rich ecosystem is growing at a fast pace of innovation that
    delivers new AWS features to the platform every year.

    Accelerated Migration of Mainframe Workloads Toward Agility

    The preceding architecture provides the majority of AWS benefits in the short term for a mainframe workload.

    We typically take one or two years to transform and migrate a legacy mainframe workload with a few million lines of code consuming thousands of mainframe MIPS. Such migration is technically complex. We are successful in delivering the benefits of agility in the short-term only if we
    reduce the risks and accelerate the migration with highly automated, mature, and fit-for-purpose tools or accelerators.

    The majority of our customers leverage short-term migration tools using Middleware Emulation, Automated Refactoring, or a combination of both. You can learn more about some of their key characteristics and find example tool names in this post about
    Demystifying Legacy Migration Options to the AWS Cloud.

Beware that some tools may look alike initially, but can differ drastically in their target architecture and capabilities. For example, basic code recompilation or basic code conversion is limited. We typically need some middleware enablers or tool-based refactoring to allow service enablement, modularity, elasticity, and on-demand, disposable servers.

This is the reason we recommend a detailed technical analysis of how a particular solution facilitates or inhibits the 12 agility attributes.

When using such a tool or accelerator, the migration is flexible enough to accommodate business priorities with incremental transitions. Such an incremental approach accelerates value delivery with achievable and tangible business benefits at every step. It allows:

    • Incremental transition from
      mainframe to AWS.
    • Incremental transition from legacy language to a language of choice (COBOL, Java, C#).
    • Incremental transition from bounded capacity to elastic compute, to containers, or to functions.
    • Incremental transition from monolith to macroservices and microservices.
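One common mechanism behind such incremental transitions is a routing layer that sends each business function either to the migrated service or to the remaining mainframe, so functions move one at a time. A sketch with invented function names and handlers:

```python
# Strangler-style routing for an incremental migration: each business function
# is directed either to the migrated AWS service or to the remaining mainframe.
# Moving a function is a one-line routing change, so each transition stays
# small and reversible. Names and handlers are invented for illustration.

MIGRATED = {"billing", "reporting"}  # functions already moved to AWS

def call_aws(function, payload):
    return f"aws:{function}({payload})"

def call_mainframe(function, payload):
    return f"mainframe:{function}({payload})"

def route(function, payload):
    handler = call_aws if function in MIGRATED else call_mainframe
    return handler(function, payload)

print(route("billing", "acct=7"))   # aws:billing(acct=7)
print(route("payroll", "emp=42"))   # mainframe:payroll(emp=42)
```

When the last function leaves the `MIGRATED` complement, the mainframe path can be retired; until then, both environments run side by side behind one interface.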

    Short-Term Migration vs. Rip and Replace

    Sometimes we encounter customers or systems integrators that are trying to achieve an agile architecture with large-scale manual rewrites or
    re-engineering projects. This approach is known as rip and replace, which accumulates risks on many dimensions, including considerably higher costs, duration over many years, financial drift, manual software development bugs, and inconsistent business functions.

    These risks result in numerous $10M+ mainframe rewrite projects that fail and make news headlines. So, the key question becomes: Is there more or less value expected from such long-term manual rewrite compared to the short-term
    migration toward the 12 agility attributes?

    A 2022 Gartner study concluded that:

    “Application leaders should instead manage their portfolio as an asset, removing impediments and executing continuous business-driven modernization to provide optimum value.”

A 2022 IDC paper confirms the trend:

“… seeing a shift from a rip and replace approach toward modernization strategies that are aimed at gaining significant business value in the form of agility, new business capabilities, and a reduction in total cost of ownership (TCO) and risk.”

    Contrary to large-scale manual rewrites,
    an accelerated tool-based migration reduces the risks and secures agility business benefits in the short term. We further reduce the mainframe migration risks with extensive risk mitigation best practices that cover business, technical, and project aspects.

    In addition to the agility benefits, customers migrating a mainframe workload with AWS typically benefit from substantial cost savings in the 60-90 percent range with a quick return on investment. These savings can finance subsequent
    workload migration or innovation.

You can learn about some of our mainframe migration customer stories with The New York Times, Vanguard, US
    Department of Defense, Capital One, and Sysco.

    Moving to AWS and adopting the 12 agility attributes is only the first stage in a modernization journey. Once on AWS, customers quickly iterate for optimizing the workloads and for integrating with the next generation of innovations.

    Go Build for Agility

    Solutions and
    tools are currently available on the market to satisfy all 12 agility attributes for mainframe workloads. AWS and its ecosystem are uniquely positioned to facilitate them.

    When designing and building the next target architecture for a mainframe workload, insist on the highest standards and request all 12 agility attributes so you can maximize the business benefits of your mainframe transformation in the short term.

We encourage you to go build with a hands-on, pragmatic approach that
    demonstrates feasibility and value in a proof of concept or pilot. You can also learn more about success stories and solutions for mainframes in our blog posts about Mainframe Modernization.

Which of the following are benefits of the AWS Cloud? (Select two)

The benefits of AWS include:

  • Ease of use
  • An incredibly diverse array of tools
  • Unlimited server capacity
  • Reliable encryption and security
  • Availability of managed IT services
  • Flexibility and affordability

Which of the following are ways to improve security on AWS? (Choose two)

Top security items to improve in your AWS account:

  1. Accurate account information
  2. Use multi-factor authentication (MFA)
  3. No hard-coding secrets
  4. Limit security groups
  5. Intentional data policies
  6. Centralize CloudTrail logs
  7. Validate IAM roles

    What is AWS cloud agility?

Business agility is the ability of an organization to quickly adapt to market changes, to respond rapidly and flexibly to customer demand, and to continuously maintain a competitive advantage.

    Which AWS service or feature allows users to connect with and deploy AWS services programmatically?

    AWS CodeDeploy fully automates your software deployments, allowing you to deploy reliably and rapidly. You can consistently deploy your application across your development, test, and production environments whether deploying to Amazon EC2, AWS Fargate, AWS Lambda, or your on-premises servers.