Channel: Amazon Web Services – GlobalDots – CDN, Security and Performance Solutions

7 Trends Driving Enterprise IT Transformation in 2019


Enabling the business outcome in a ‘Real-Time’ enterprise environment is the next challenge for global brands and government agencies in 2019.

Tech companies will need to drive hard to continually exceed their customers’ expectations during a time of accelerating change. They will need to show how technology can help deliver on their customers’ objectives and improve agility, security and impact, or they risk being disrupted.

Here is Verizon Enterprise Solutions’ view of those enterprise technology trends that are most likely to impact our global business and government customers in 2019.

Foundational technologies – Software Defined Networks, 4G, the Internet of Things, intelligent video, security, telematics – are already changing the operations of business. In 2019, savvy CIOs will be focusing on how to reinvent their operations to leverage the enormous potential promised by disruptive technologies like 5G, artificial intelligence/machine learning, automation and robotics, augmented and virtual reality and the next-gen cloud including edge computing.

Customer experience (CX) has been a hot topic in recent years, but many of us have had personal experience of big brands letting us down. With AI infiltrating CX systems, there’s an unprecedented opportunity to move to a principle of ‘personalization for you’, putting the customer back at the center of the business opportunity.

Robotic process automation and machine learning (ML) will transform how business operates – and what skills a business workforce needs. In 2019, educators and businesses will focus on how to build a pool of data scientists and ML specialists to support our future “skills needs”, rather than yesterday’s business requirements.

Read more: Help Net Security



What is Kubernetes?


Developing software is hard. Even the largest, most successful companies can run into issues when developing new applications – first you have to develop dozens of libraries, packages and other software components, and then you have to make sure your software stacks are up to date, that they’re running smoothly, that they can be scaled according to business needs and so on.

For many years now, the leading way to isolate and organize applications and their dependencies has been to place each application in its own virtual machine. Virtual machines make it possible to run multiple applications on the same physical hardware while keeping conflicts among software components and competition for hardware resources to a minimum.

But virtual machines are bulky—typically gigabytes in size. They don’t really solve problems like portability, software updates, or continuous integration and continuous delivery.

To resolve these issues, organizations have adopted Docker containers. Containers make it possible to isolate applications into small, lightweight execution environments that share the operating system kernel. Typically measured in megabytes, containers use far fewer resources than virtual machines and start up almost immediately. They can be packed far more densely on the same hardware and spun up and down en masse with far less effort and overhead.
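
To make the contrast concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); it assumes a local Docker daemon is available, and the image and command are purely illustrative:

```python
# Minimal sketch: running an isolated, lightweight container via the
# Docker SDK for Python (pip install docker). Assumes a local Docker
# daemon is running; the image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The container shares the host kernel but gets its own filesystem and
# process namespace, starts almost immediately, and can be resource-bounded.
output = client.containers.run(
    image="alpine:3.19",    # a few megabytes, versus gigabyte-scale VM images
    command="echo hello from an isolated container",
    mem_limit="64m",        # bound the container's memory usage
    remove=True,            # clean the container up once it exits
)
print(output.decode())
```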


Why should you use containers?

The old way to deploy applications was to install the applications on a host using the operating-system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.

The new way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.

Because containers are small and fast, one application can be packed in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production.

Similarly, containers are vastly more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers’ process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.

Benefits you get by using containers

Here is a summary of container benefits:

  • Agile application creation and deployment: Increased ease and efficiency of container image creation compared to VM image use.
  • Continuous development, integration, and deployment: Provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
  • Dev and Ops separation of concerns: Create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
  • Observability: Not only surfaces OS-level information and metrics, but also application health and other signals.
  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic, liberated micro-services: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
  • Resource isolation: Predictable application performance.
  • Resource utilization: High efficiency and density.

What is Kubernetes?

Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts.”

Kubernetes, at its basic level, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

The central component of Kubernetes is the cluster. A cluster is made up of many virtual or physical machines that each serve a specialized function either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic based on new container alignments.

The following diagram depicts a general outline of a Kubernetes cluster:

[Diagram: a Kubernetes cluster, with a master coordinating groups of containers across nodes]

As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or rollback problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with high degrees of flexibility, power, and reliability.
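
As an illustration, the following hedged sketch uses the official Kubernetes Python client to scale a Deployment and trigger a rolling update; the Deployment name, namespace and image tag are hypothetical, and a valid kubeconfig is assumed:

```python
# A sketch (not a full manifest) of driving Kubernetes with the official
# Python client (pip install kubernetes). The Deployment name "web", the
# namespace and the image tag are hypothetical; a valid kubeconfig is assumed.
from kubernetes import client, config

config.load_kube_config()   # authenticate using the local kubeconfig
apps = client.AppsV1Api()

# Scale the "web" Deployment up to five replicas to absorb more traffic.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default",
    body={"spec": {"replicas": 5}},
)

# Trigger a graceful rolling update by swapping the container image;
# Kubernetes replaces pods incrementally, keeping the service available.
apps.patch_namespaced_deployment(
    name="web", namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:2.0"}
    ]}}}},
)
```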

Conclusion

Kubernetes is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It gives you complete control over container orchestration, enabling you to deploy, maintain and scale application containers across a cluster of hosts. If you have any questions, contact us today to help you out with your performance and security needs.


CloudEndure Confirms Acquisition by Amazon Web Services


It’s official: Amazon Web Services (AWS) has acquired Israeli cloud disaster recovery and backup specialists CloudEndure.

The news had been rumoured over the past few days, but a short announcement from CloudEndure this morning confirmed it. The company said little more than that the acquisition “expands [its] ability to deliver innovative and flexible migration, disaster recovery, and backup solutions.”

CloudEndure offers disaster recovery, continuous backup and migration tools across AWS, Google Cloud Platform, Microsoft Azure, and VMware. Following the acquisition it is unclear how these offerings will play out, although it is worth noting that the CloudEndure website has been redesigned to reflect the news, with the ‘contact us’ form leading directly to a landing page for AWS’ Migration Acceleration Program.

The move can be seen as yet another step on the path to the next evolution of cloud. As AWS, Microsoft and Google have long since emerged victorious in the infrastructure space, the current battleground focuses on cloud management and migration.

Read more: Cloud Computing News 


Rackspace Delivers End-to-End Management for AWS Database Services


Rackspace announced the launch of Rackspace Managed Database Services for Amazon Web Services, a suite of services designed to accelerate customers’ adoption of Amazon Aurora, Amazon Redshift, AWS Glue and Amazon Athena.

As an AWS Partner Network Premier Consulting Partner with nearly 20 years of experience managing complex database environments, Rackspace has developed an end-to-end data management service designed to help customers architect, integrate, and optimize their environments with the relational database management system, data warehouse, and serverless services on AWS. Rackspace Managed Database Services for AWS helps customers prepare, load, and query vast sets of data more efficiently and effectively, while reducing operational costs and time to market.

This new offering is the latest example of how Rackspace is fulfilling its strategy to deliver modern IT as a service for its customers by offering unbiased expertise, a portfolio of cloud solutions, Fanatical Experience, and agile delivery of IT the way customers want. With its new Managed Database Services for AWS, Rackspace enables customers to store and analyze the rapidly increasing amount of data needed to run their organizations and meet their evolving business needs.

Read more: Rackspace Blog 

 


Cloud Computing Dominates Amazon Profits


Amazon reported improved earnings on Thursday, raking in over $3 billion in profit for the final quarter of 2018, with a large portion coming through its cloud computing arm, AWS.

The leading cloud provider’s profits grew 45% year on year, continuing an annual trend of growing by at least 40% each year. The cloud division has become crucial for the whole of Amazon for both revenue and profits.

Indeed, the PaaS offerings proved a big hit during the company’s Re:Invent conference in November. Lots of effort and marketing had gone into customisable services, such as the AWS Marketplace for machine learning and AI. This, combined with a whole host of new offerings and new partnerships, such as Fender guitars, Formula 1 and Zurich Insurance, painted a picture of good health for the cloud division of Amazon.

The company as a whole reported its third record profit in a row, thanks to a strong year for cloud computing, advertising and a successful festive period. The Seattle-based company’s fourth-quarter profit of $3.03bn, or $6.04 a share, is up from $1.86bn, or $3.75 a share, in the same quarter a year earlier.

Read more: Cloud Pro UK


AWS Launches Five New Bare Metal Instances to Give Customers Greater Cloud Control


AWS has unveiled five new EC2 bare metal instances to run high-intensity workloads, such as performance analysis, specialised applications and legacy workloads not supported in virtual environments.

The new instances – m5.metal, m5d.metal, r5.metal, r5d.metal, and z1d.metal – have all been designed to run virtualisation-secured containers such as Clear Linux Containers. Each offers its own set of resources, with the m5 variations offering 384 GiB of memory, the r5 options 768 GiB (both up to 3.1GHz all-core turbo) and the z1d 384 GiB, but with up to 4GHz across 48 logical processors.

AWS has specified that the different levels of bare metal instance have been created for different scenarios. For example, the m5 instances will be useful for web and application servers, as well as back-end servers for enterprise applications and gaming servers, while the r5 models are best suited to high-performance database applications and real-time analytics.

The company’s z1d instances are best used for electronic design automation, gaming and relational database workloads because of their high compute and memory offerings.

Read more: Cloud Pro UK


What is Serverless Computing?


Cloud computing has brought enormous change to the world of applications. It makes long-standing constraints on application development and deployment disappear. It’s no exaggeration that most of the innovation in IT over the past decade has been enabled, catalyzed, or caused by cloud computing.

Lately, a new cloud-based technology has emerged that has the potential to drastically alter the existing tech ecosystem. It’s called serverless computing.

In this article we define serverless computing, and take a look at its benefits.

What is serverless computing?

In the early days of the web, anyone who wanted to build a web application had to own the physical hardware required to run a server, which is a cumbersome and expensive undertaking.

Then came the cloud, where fixed numbers of servers or amounts of server space could be rented remotely. Developers and companies who rent these fixed units of server space generally over-purchase to ensure that a spike in traffic or activity wouldn’t exceed their monthly limits and break their applications. This meant that much of the server space that was paid for usually went to waste. Cloud vendors have introduced auto-scaling models to address the issue, but even with auto-scaling an unwanted spike in activity, such as a DDoS attack, could end up being very expensive.

Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. A serverless application runs in stateless compute containers that are event-triggered, ephemeral (may last for one invocation), and fully managed by the cloud provider. Pricing is based on the number of executions rather than pre-purchased compute capacity.
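
For example, a minimal AWS Lambda function in Python looks like the following; the handler signature is the standard one, while the event shape is illustrative:

```python
# The standard AWS Lambda handler signature in Python. Each invocation is
# event-triggered and stateless: nothing is assumed to survive between calls.
# The event shape below is illustrative.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```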


Why use serverless?

Serverless computing offers a number of advantages over traditional cloud-based or server-centric infrastructure. For many developers, serverless architectures offer greater scalability, more flexibility, and quicker time to release, all at a reduced cost. With serverless architectures, developers do not need to worry about purchasing, provisioning, and managing backend servers. However, serverless computing is not a magic bullet for all web application developers.

Serverless computing can simplify the process of deploying code into production. Scaling, capacity planning and maintenance operations may be hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.

The difference between traditional cloud computing and serverless is that you, the customer who requires the computing, don’t pay for underutilized resources. Instead of spinning up a server in AWS, for example, you’re just spinning up some code execution time. The serverless computing service takes your functions as input, performs logic, returns your output, and then shuts down. You are only billed for the resources used during the execution of those functions.
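
As a sketch of that model, here is how a deployed function could be invoked with boto3; the function name is hypothetical, and billing covers only the execution time this call consumes:

```python
# Sketch: buying code execution time instead of a server. Invokes a deployed
# Lambda function with boto3; "my-function" is a hypothetical function name,
# and you are billed only for the compute this call consumes.
import json
import boto3

lam = boto3.client("lambda")
response = lam.invoke(
    FunctionName="my-function",
    Payload=json.dumps({"name": "GlobalDots"}),
)
print(json.loads(response["Payload"].read()))  # the function's return value
```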


What are the advantages of serverless computing?

Let’s take a look at the advantages serverless computing offers to businesses:

Lower costs 

Serverless computing is generally very cost-effective, as paying a traditional cloud provider for backend server allocation often means paying for unused space or idle CPU time.

No server management

Although ‘serverless’ computing does actually take place on servers, developers never have to deal with them; the servers are managed by the vendor. This can reduce the investment necessary in DevOps, which lowers expenses, and it also frees up developers to create and expand their applications without being constrained by server capacity.

Simplified scalability

Developers using serverless architecture don’t have to worry about policies to scale up their code. The serverless vendor handles all of the scaling on demand. As a result, a serverless application will be able to handle an unusually high number of requests just as well as it can process a single request from a single user. A traditionally structured application with a fixed amount of server space can be overwhelmed by a sudden increase in usage.

Quick deployments and updates

Using a serverless infrastructure, there is no need to upload code to servers or do any backend configuration in order to release a working version of an application. Developers can very quickly upload bits of code and release a new product. They can upload code all at once or one function at a time, since the application is not a single monolithic stack but rather a collection of functions provisioned by the vendor.

Decreased latency

Because the application is not hosted on an origin server, its code can be run from anywhere. It is therefore possible, depending on the vendor used, to run application functions on servers that are close to the end user. This reduces latency because requests from the user no longer have to travel all the way to an origin server.

Quicker turnaround

Serverless architecture can significantly cut time to market. Instead of needing a complicated deploy process to roll out bug fixes and new features, developers can add and modify code on a piecemeal basis.

Serverless vs Containers

Both serverless computing and containers enable developers to build applications with far less overhead and more flexibility than applications hosted on traditional servers or virtual machines. Which style of architecture a developer should use depends on the needs of the application, but serverless applications are more scalable and usually more cost-effective.

Containers provide a lighter-weight execution environment, making instantiation faster and increasing hardware utilization, but they don’t change the fundamental application operations process. Users are still expected to take on the lion’s share of making sure the application remains up and running.

With serverless, the cloud provider takes on the responsibility of making sure that the application code gets loaded and executed, and it ensures that sufficient computing resources are available to run your code, no matter how much processing it requires.

Conclusion

Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. Serverless computing offers many benefits compared to the standard cloud model, including lower costs, simplified scalability, quicker deployments and more.

If you have any questions about serverless computing and how it can help you achieve your business goals, contact us today to help you out with your performance and security needs.


AWS Unveils Amazon Managed Blockchain to Easily Create, Manage, and Scale Blockchain Networks


Amazon Web Services, an Amazon.com company, announced the general availability of Amazon Managed Blockchain, a fully managed service that makes it easy to create and manage scalable blockchain networks.

Customers who want to allow multiple parties to execute transactions and maintain a cryptographically verifiable record of them, without the need for a trusted, central authority, can quickly set up a blockchain network spanning multiple AWS accounts with a few clicks in the AWS Management Console.

Amazon Managed Blockchain scales to support thousands of applications and millions of transactions using popular open source frameworks like Hyperledger Fabric and Ethereum.

For customers in businesses like finance, logistics, retail, and energy that need to perform transactions quickly across multiple entities, blockchain gives them the ability to execute contracts and share data, with an immutable record of the transactions, but without the need for a trusted, central authority.

Read more: Help Net Security



CloudBees and AWS Enabling Customers to Deploy CloudBees Core in an AWS Environment


CloudBees, the enterprise DevOps leader powering the continuous economy, announced at the AWS Summit in London the availability of the CloudBees AWS Quick Start solution, comprising CloudBees Core and Amazon Elastic Compute Cloud (EC2) Spot Instances.

Tighter integrations between CloudBees and AWS will enable customers to deploy CloudBees Core in an AWS environment in a matter of minutes and cut cloud consumption costs up to 90 percent by leveraging Spot Instances.

CloudBees Core is a cloud native, continuous delivery (CD) solution that can be hosted on-premise or in the cloud. It provides a shared, centrally managed, self-service experience for development teams.

Leveraging AWS best practices for security and high availability, the CloudBees Core AWS Quick Start solution simplifies deployment to the cloud by reducing hundreds of manual processes to just a few steps.

Read more: Help Net Security


Mastering AWS Cost Optimization: Real-World Technical and Operational Cost-Saving Best Practices


Cloud cost control and optimization has become a leading issue for businesses. As the number of companies utilizing cloud increases, IT professionals are looking for ways to reduce their cloud costs.

Cloud infrastructure offers many benefits for organizations, but it also presents some challenges. Cloud computing benefits include:

  • Efficiency
  • Data security
  • Scalability
  • Mobility
  • Disaster recovery
  • Control

However, it’s also important to understand how moving to the cloud impacts organizations. A major problem that contributes to cloud cost management challenges is the difficulty organizations have tracking and forecasting usage.

Gartner estimates that organizations that have done little or no cloud optimization are overspending by 70% or more. To address this challenge, organizations are shifting their focus to cost optimization, seeking better visibility into their cloud spend and deploying governing policies for cloud cost control.

Managing cloud infrastructures can be substantially more complex than traditional data infrastructures; however, cloud infrastructures have the potential to become highly optimized, intelligent systems that improve enterprises. Succeeding on the cloud means making the right business decisions and executing the right technological choices. At scale, these challenges get incredibly complex due to huge amounts of constantly changing data.

All these topics, and more, are examined in Mastering AWS Cost Optimization: Real-World Technical and Operational Cost-Saving Best Practices, co-authored by Eli Mansoor and GlobalDots’ own CTO Yair Green.

[Book cover: Mastering AWS Cost Optimization, by Eli Mansoor and Yair Green]

Why write this book in the first place?

Because cloud computing represents much more than new technology and tools. The costs of cloud computing are related to new pay-per-use pricing models, new consumption models, new operational methodologies, new tracking and reporting systems, and more. Traditional approaches to cost analysis and optimization simply do not apply to public cloud computing.

This book is intended to support you in overcoming what is currently considered one of the top challenges that organizations face in their transition journey towards public cloud: the challenge of cost control and optimization. This applies whether you are part of a technologically-savvy “born-to-the-cloud” team, or whether you are part of an enterprise organization taking its first steps towards public cloud adoption.

Reading this book will give you a better understanding of both the technical and operational aspects of the process. This ensures that you will succeed in taking advantage of advanced technology for building innovative, next-generation products, while doing so in an optimized and cost-effective manner. The book contains many proven technical, operational, and application-related best practices, all of them implemented in the effort to control and reduce the costs of our own cloud infrastructure as well as that of our customers.

Cloud costs management

The first step to taking charge of your cloud spending is being fully informed about your cloud costs and usage. With resources being spun up by people across your organization, this can be a complicated process, but one that’s essential for operating your cloud according to best practices.

In a recent study, 451 Research asked enterprise companies across the US and the UK what methods they use to view cloud costs across their enterprise. The responses broke down into four categories:

[Chart: methods enterprises use to view cloud costs, per 451 Research]

A surprising number of people responded that they have little visibility into their cloud costs. These companies most likely learn about their cloud costs when they get the invoices from their cloud vendors. That means the first time they find out about any overspend is one to two months after the costs were incurred — way too late to take corrective action.

The second limitation is one of customization. Cloud vendors’ native cost tools are built to integrate with each individual vendor’s offerings, which means cost and usage data is viewed from a foundation built on their service structure. As long as your team structure smoothly fits the cloud provider’s structure, you can get the data you want, but mapping that data directly to your organization becomes a little more complicated.

To follow best practices, you should be able to view your entire cloud infrastructure as it ties to your organization and team structure in a single location.

APIs and detailed billing data make it easy for companies to get incredibly granular data about their cloud cost and usage. Accessing AWS’ Cost and Usage Report (CUR) file, for example, can be done with a few clicks and an S3 bucket. So it’s no surprise that the most popular method for viewing cloud costs is extracting cost information into a single view, very often in the form of a massive spreadsheet file.

And therein lies the danger of using this methodology for managing cloud costs. A CUR file can contain well over a hundred lines of data for each resource. For a small number of resources, a manual spreadsheet can be manageable. But as your cloud infrastructure grows, it won’t take long before that spreadsheet gets too massive to tackle manually.

Having a single view like a spreadsheet or a simple in-house tool also limits your options for viewing your cost data, sharing it with others and taking decisive action. Even something as relatively straight-forward as showing a team the specific cloud costs they incur becomes tricky when it’s all in one massive spreadsheet, and trying to view the data from multiple angles to uncover waste or optimization opportunities becomes almost impossible.
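
By way of illustration, the sketch below pulls a CUR file from S3 and aggregates cost by a cost-allocation tag programmatically, which scales far better than a spreadsheet; the bucket, key and tag column are assumptions, since actual CUR column names depend on the report configuration:

```python
# Sketch: aggregating CUR data programmatically instead of in a spreadsheet.
# The bucket, key and tag column are assumptions; actual CUR column names
# depend on how the report is configured.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-cur-bucket", Key="cur/report-00001.csv.gz")
df = pd.read_csv(io.BytesIO(obj["Body"].read()), compression="gzip",
                 low_memory=False)

# Cost per team via a cost-allocation tag (hypothetical tag key "Team").
by_team = (
    df.groupby("resourceTags/user:Team")["lineItem/UnblendedCost"]
      .sum()
      .sort_values(ascending=False)
)
print(by_team)
```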

Who should read this book?

The authors recommend that everyone involved in a cloud project read this book, from those undergoing their first cloud transformation projects through early adopters in “born-in-the-cloud” companies.

This book is a product of hands-on technical experience with managing large-scale cloud environments, and the operational experience gained from collaborating with various CCoE (Cloud Center of Excellence) units of large and global enterprises. This approach represents a new and unique combination of technical and operationally proven experience that will provide value to readers from all teams: DevOps engineers, IT operations, cloud and software architects, developers, QA engineers, product managers, CCoE team members, procurement, finance, business analysts, and others.

What will you learn?

  • The Amazon Compute (EC2, Lambda, Container Services), Storage (S3, Glacier, EBS, and EFS), and Networking services pricing models
  • Best practices for architecting and operating your cloud environments for cost optimization and efficiency
  • How to build applications that are lightweight from the perspective of resource requirements
  • How to leverage AWS operational services (Service Catalog, Config, Budgets, Landing Zone, Tagging, CloudWatch, and others) for ensuring continuous governance and on-going cost efficiency

Conclusion

Cloud cost optimization is a complex topic which requires an understanding of cloud architecture, business objectives and cloud cost management best practices. If you want to learn more about these topics to cut cloud costs and drive revenue, read Mastering AWS Cost Optimization: Real-World Technical and Operational Cost-Saving Best Practices.

You can buy the paperback and Kindle editions of the book on Amazon.

Cloud costs can sometimes be difficult to estimate, due to the complexity of the cloud infrastructure. If you have any questions about how we can help you optimize your cloud costs and performance, contact us today to help you out with your performance and security needs.


AWS Network Performance Management Problems and Solutions


Cloud based online services and applications face the challenge of optimizing end user experience in the face of an ever growing bandwidth demand. This is further complicated by network related aspects of application performance like latency, congestion and jitter which are inherent to the architecture of the internet.

These issues can have a very negative impact on end user experience and can vary widely depending on the customer’s geographical location or the ISP/network that they use. End users on the US East Coast may have a great experience, while those in Europe see long latencies and very slow performance of your Web shop or application. Similarly, end users at a specific ISP may experience a very responsive service, whereas their neighbor down the street is confronted with a very sluggish web site with long page loading times due to slow delivery of ads in the pages.

In this article we discuss AWS network performance management problems and solutions.


Key problems in AWS network management

1. Business problem

Network performance is essentially a black box for online service providers, who have little to no visibility into performance metrics like latency and congestion. As of today, (Net)DevOps personnel have to manually diagnose network performance issues and redirect network traffic to avoid these problems. This is not an exact science and is mostly reactive in nature. Putting in place the hardware capabilities to optimize the network-related aspects of cloud application performance is also costly and complex, and the absence of any cost-effective, automated tools to improve these performance metrics in any meaningful way adds to the problems.

The business impact however is very real:

  • Delayed Ad serving, bidding for Adtech
  • Lower conversion rates in eCommerce
  • Customer support issues
  • Lower return rates
  • Increased churn rates
  • Higher bandwidth costs
  • Business/Service competitiveness
  • Losing out on potential sales
  • End user quality of service suffers

Network providers have a vested interest in BGP route selection. Not all routes through the internet cost the same, and BGP route selections are often influenced by the business interests of network providers and their wish to control next-hop selection.

ISPs often choose to route traffic through network paths that have the most financial benefit for them, rather than based on network performance metrics. There have been documented cases of large ISPs intentionally creating congestion at some network nodes to charge service providers premium rates for non-congested paths. In effect, they create a lesser version of the internet so they can charge bigger bucks for the internet as usual.

2. Technical problem

The internet is a huge mesh of complex interconnected networks. It utilizes two groups of routing protocols to determine the path of traffic through the various networks: Interior Gateway Protocols (IGPs) for intradomain routing, and BGP for interdomain routing between Autonomous System (AS) organizations. One way to understand network performance issues is to look at the way in which internet traffic is routed by BGP.

BGP serves as the standardized routing protocol of the internet. It was designed in the early days of the internet with a focus on network reachability and stability; however, it is not very smart when it comes to routing traffic to optimize performance-related metrics like latency, congestion and packet loss. In addition, it has become very hard to analyze, manage and troubleshoot with the explosive growth of the internet.

BGP works by exchanging routing and reachability information between autonomous systems on the internet. BGP makes routing decisions based on a number of metrics, including reachability and the AS_PATH attribute. This basically translates into choosing network routes which are reachable and have the lowest number of AS hops. BGP does not have the capacity to evaluate different network routes based on performance metrics like latency, congestion and packet loss, so these crucial metrics are left out of routing decisions. As a result, network traffic often suffers from high latency, congestion and packet loss.
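
A toy example makes the gap concrete; the AS paths and latency figures below are invented, but they show how shortest-AS_PATH selection can pick the slower route:

```python
# Toy illustration, not real BGP code: shortest-AS_PATH selection can pick
# the slower route. The AS paths and latency figures are invented.
routes = [
    {"as_path": ["AS100", "AS200"],          "latency_ms": 180},  # 2 AS hops
    {"as_path": ["AS100", "AS300", "AS400"], "latency_ms": 45},   # 3 AS hops
]

bgp_choice = min(routes, key=lambda r: len(r["as_path"]))   # fewest hops wins
perf_choice = min(routes, key=lambda r: r["latency_ms"])    # what users feel

print("BGP-style choice:  ", bgp_choice["as_path"], bgp_choice["latency_ms"], "ms")
print("Performance choice:", perf_choice["as_path"], perf_choice["latency_ms"], "ms")
```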

GlobalDots’ solution

GlobalDots probes upwards of 600,000 network prefixes in real time and collects performance data for every path through the network. This data is then processed and analyzed to determine the best path through the network with the lowest latency, congestion, bandwidth cost and packet loss. (Net)DevOps teams have the ability to create rules that automate network traffic routing and, in the process, minimize network performance issues.

The Cloud/AWS deployment places the GlobalDots appliance between the virtual infrastructure and the transit providers. The connection to the virtual infrastructure is a physical connection (e.g. AWS Direct Connect over 1000BASE-LX or 10GBASE-LR). Each customer is connected to the GlobalDots premises using a unique VLAN identifier.

Within the GlobalDots premises there exists a virtual routing table for each application, reflecting the specific performance requirements of the customer. GlobalDots collects RIB (Routing Information Base) data from routeviews.org. The RIB is a database that stores the routing information a BGP speaker receives from its peers.


Next, GlobalDots probes all prefixes in the RIB for specific metrics like latency, congestion, packet loss and bandwidth cost. All this performance data is processed and analyzed by a Spark cluster, and an optimized routing policy is generated for the specific metrics the customer has indicated. The optimized routing policy is generated by matching the performance attributes of each destination network (prefix) to the customer’s requirements.

Once the prefixes matching the customer’s performance requirements are identified, the customer can opt to override BGP’s best-path selection. Detailed analytic reports are generated and communicated to the customer through the front-end dashboard, providing end-to-end visibility into network performance.
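
The following is a simplified sketch of that matching idea, not GlobalDots’ actual implementation; the prefixes, providers, metrics and thresholds are all invented for illustration:

```python
# Simplified sketch of the matching idea described above, not GlobalDots'
# actual implementation. Prefixes, providers, metrics and thresholds are
# all invented for illustration.
probe_data = {
    "203.0.113.0/24": [
        {"provider": "transit-A", "latency_ms": 90, "loss_pct": 0.5},
        {"provider": "transit-B", "latency_ms": 60, "loss_pct": 0.1},
    ],
}
requirements = {"max_latency_ms": 80, "max_loss_pct": 0.3}

def build_policy(probes, reqs):
    """For each prefix, pick the compliant path with the lowest latency."""
    policy = {}
    for prefix, paths in probes.items():
        ok = [p for p in paths
              if p["latency_ms"] <= reqs["max_latency_ms"]
              and p["loss_pct"] <= reqs["max_loss_pct"]]
        if ok:
            # A compliant path can override BGP's default best-path choice.
            policy[prefix] = min(ok, key=lambda p: p["latency_ms"])["provider"]
    return policy

print(build_policy(probe_data, requirements))  # {'203.0.113.0/24': 'transit-B'}
```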

Benefits include:

  • Network route optimization
  • Latency optimization
  • Bandwidth cost optimization
  • Avoid network congestion
  • Reduce packet loss
  • Real-time network analysis

Conclusion

Managing AWS network performance has specific problems you need to solve if you want to get the most out of it. If you have any questions about how we can help you optimize your cloud costs and performance, contact us today to help you out with your performance and security needs.


AWS Launches Amazon Personalize to Bring ML Technology to its Customers


Amazon Web Services, an Amazon.com company, announced the general availability of Amazon Personalize, bringing the same machine learning technology used by Amazon.com to AWS customers for use in their applications – with no machine learning experience required.

Amazon Personalize makes it easy to develop applications with a wide array of personalization use cases, including specific product recommendations, individualized search results, and customized direct marketing. Amazon Personalize is a fully managed service that trains, tunes, and deploys custom, private machine learning models.

Amazon Personalize provisions the necessary infrastructure and manages the entire machine learning pipeline, including processing the data, identifying features, selecting algorithms, and training, optimizing, and hosting the results. Customers receive results via an Application Programming Interface (API) and only pay for what they use, with no minimum fees or upfront commitments.

Read more: Help Net Security


Vendor Revenue in the Worldwide Server Market Increased to $19.8 Billion in Q1 2019


Vendor revenue in the worldwide server market increased 4.4% year over year to $19.8 billion during the first quarter of 2019 (1Q19). Worldwide server shipments declined 5.1% year over year to just under 2.6 million units in 1Q19, according to the IDC Worldwide Quarterly Server Tracker.

The overall server market slowed in 1Q19 after six consecutive quarters of double-digit revenue growth, although pockets of robust growth remain. Volume server revenue increased by 4.2% to $16.7 billion, while midrange server revenue grew 30.2% to $2.1 billion. High-end systems contracted steeply for a second consecutive quarter, declining 24.7% year over year to $976 million.

The number one position in the worldwide server market during 1Q19 was held by Dell Technologies with 20.2% revenue share, followed by HPE/New H3C Group with 17.8%. Dell Technologies grew revenue 8.9% year over year, while HPE/New H3C Group increased revenue 0.2%.

Read more: Help Net Security


Annual Spend on Mobile Edge Computing will Reach $11.2 Billion by 2024


Total annual spend on Mobile Edge Computing (the collection and analysis of data at the source of generation, at the Edge of the network, instead of a centralised location such as the cloud), will reach $11.2 billion by 2024, according to Juniper Research.

This is up from an estimated $1.3 billion in 2019, with an average annual growth of 52.9%.

Juniper Research ranked leading players in the Edge Processing sector by a range of factors, such as the depth of their experience in IoT, their geographical footprint, along with the number, and type, of industries served.

The top-ranked players include:

  • Siemens
  • Bosch
  • AWS
  • VMware

Read more: Help Net Security


New Open Source Solution Reduces the Risks Associated with Cloud Deployments


An open source user computer environment (UCE) for the Amazon Cloud, called Galahad, has been launched by the University of Texas at San Antonio (UTSA).

The technology will help protect people using desktop applications running on digital platforms such as Amazon Web Services (AWS).

Galahad will leverage nested virtualization, layered sensing and logging to mitigate cloud threats. These layers will allow individual users to host their applications seamlessly and securely within the cloud avoiding both known and unknown threats.

Galahad takes a holistic approach to creating a secure, interactive UCE. The software leverages role-based isolation, attack surface minimization practices, operating system (OS) and application hardening techniques, real-time sensing, and maneuver / deception approaches to reduce the risk associated with cloud deployments.

Read more: Help Net Security



AWS Launches Amazon Quantum Ledger Database


Amazon Web Services (AWS), an Amazon.com company, announced the general availability of Amazon Quantum Ledger Database (QLDB), a fully managed service that provides a high-performance, immutable, and cryptographically verifiable ledger for applications that need a central, trusted authority to provide a permanent and complete record of transactions across industries like retail, finance, manufacturing, insurance, and human resources.

Amazon QLDB offers familiar database capabilities that make it easy to use, and its document-oriented data model is flexible, enabling customers to store structured and unstructured data in the ledger. There are no upfront commitments to use Amazon QLDB, and customers pay for what they use.

Customers looking to implement blockchain technologies are typically trying to accomplish one of two things. Some customers want the immutable and verifiable record of transactions provided by a ledger, but also want to allow multiple parties to transact, execute contracts, and share data anonymously, without the need for a trusted central authority (e.g. transferring reward points across a network of vendors, or processing transactions that involve a number of different trade and logistics companies).

Read more: Help Net Security

