Google Cloud for Edge Computing – Building enterprise edge applications using Google Cloud



Realistically, your edge computing systems won't always be connected to the internet. Fortunately, there are many tools you can use to manage your edge deployments efficiently.

Some even let you tie those deployments back into your core environment. In this third blog in the series, we discuss the role of software and Google Cloud's solutions for edge computing.

Google leads with software

Google Cloud is clear about its role in edge environments: it treats the edge as its customers' and partners' domain. Rather than shipping preconfigured hardware out to remote sites, Google provides the tools and software of the Anthos suite to manage and configure all of your clusters.

This includes Google Kubernetes Engine (GKE) and open-source Kubernetes. An Anthos cluster at the edge could be a complete GKE installation or a fleet of Raspberry Pi clusters. A remote cluster can be managed over consistent or intermittent connectivity as long as it is attached to the management plane. Anthos fleets allow Kubernetes clusters to be organized into manageable groups.

These groups are defined by location and cross-service communication patterns, and an administrator can manage each group of clusters as a single unit.

This approach is different from that of other cloud providers, which may offer a similarly managed experience but on proprietary hardware that inevitably leads to some lock-in. Google's focus on the software stack sets the stage for long-term success in managing an edge fleet.

Google partners with hardware vendors to handle the hardware and base configuration of Anthos edge clusters.

Let's take a look at what Google Cloud has to offer and how its services can be used in an edge deployment.

Kubernetes & GKE

Where does Kubernetes fit in? In a nutshell, Kubernetes brings convention.

The edge is unpredictable by nature. Kubernetes provides stability and consistency, extending familiar control planes and data planes out to the edge. It allows for predictable operations and immutable, containerized deployments.

Cloud service providers and data centers offer predictable environments, and platform teams have spent two decades engineering instability out of them. They aren't used to the volatility that the wider reach of the edge introduces. Kubernetes thrives within this extended edge ecosystem.

Enterprises often think of large Kubernetes (k8s) clusters running complex, interdependent microservice workloads. At its core, though, Kubernetes is a lightweight distributed system that works well at the edge even when there are only a few focused deployments. Kubernetes improves stability and provides an open-source control-plane API. It can also serve as a communications or consolidation hub for edge installations that are saturated with devices. Kubernetes provides a standard container platform for software deployments: simple redundant pairs of NUCs or Raspberry Pi racks can increase edge availability and normalize how edge sites communicate with our data centers.
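As a sketch of what a small, focused edge deployment might look like, here is a minimal Kubernetes manifest. The workload name, container image, and resource sizes are illustrative assumptions, not details from this post:

```yaml
# Hypothetical example: a small, redundant edge workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-gateway            # illustrative workload name
  labels:
    app: sensor-gateway
spec:
  replicas: 2                     # a redundant pair for edge availability
  selector:
    matchLabels:
      app: sensor-gateway
  template:
    metadata:
      labels:
        app: sensor-gateway
    spec:
      containers:
      - name: gateway
        image: example.com/edge/sensor-gateway:1.0   # placeholder image
        resources:
          limits:
            cpu: "250m"           # sized for constrained edge hardware
            memory: "128Mi"
```

Because this is plain Kubernetes, the same manifest can be applied unchanged to a GKE cluster in the cloud or a Raspberry Pi cluster in a store room.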


Anthos brings order

The edge can prove difficult to manage without the right strategy and tools. Where it is common to run a handful of cloud providers and data centers, the number of edge surfaces can reach into the hundreds of thousands. Anthos provides control, governance, and security at that scale, overlaying a powerful management framework that spans everything from your core cloud and data center systems to your edge deployments.

Anthos allows remote GKE or Kubernetes clusters to be centrally managed while serving private, location-specific services to local clients. The Anthos edge story is developing in industries such as:

  • Warehouses
  • Retail Stores
  • Manufacturing and Factories
  • Telco and Cable Providers
  • Medical, Science and Research Labs

Anthos Config Management and Policy Control

As edge fleets grow, configuration requirements multiply. These scenarios are where Anthos Config Management (ACM) and Policy Controller come in. They enable platform operations teams to manage large deployments of edge resources (fleets) at scale. ACM allows operators to create and enforce consistent configurations across edge installations.
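ACM's Config Sync component follows a GitOps model: each cluster pulls its desired state from a shared Git repository. A minimal sketch of that wiring might look like the following RootSync resource, where the repository URL and directory are placeholder assumptions:

```yaml
# Illustrative Config Sync RootSync: every edge cluster syncs its
# configuration from the same Git repository (repo and paths are placeholders).
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/edge-config   # placeholder repo
    branch: main
    dir: clusters/edge        # directory of manifests to sync
    auth: none                # public repo here; use a secret in production
```

Applying the same RootSync to every cluster in the fleet is what makes configurations consistent: a change merged to the repository rolls out everywhere, and drift on any one cluster is reverted automatically.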

One Google Cloud customer and partner plan to deploy three bare-metal servers running Anthos (or attached clusters) in a high-availability configuration at more than 200 locations. All three nodes will act as both control plane and worker. They plan to use ACM across the entire fleet to enforce security, configuration, and policy at scale.
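Policy Controller, which is built on the open-source OPA Gatekeeper project, expresses fleet-wide rules as Kubernetes resources. As a hedged illustration (the constraint name and label are invented for this sketch), a rule requiring every namespace on every edge cluster to declare its location could look like this:

```yaml
# Hypothetical Policy Controller constraint: reject any namespace
# that does not carry a "location" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-location
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["location"]      # illustrative required label key
```

Distributed through ACM, a constraint like this is enforced identically at all 200-plus sites without an operator touching each cluster.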

Anthos Fleets

Cluster configuration becomes more fragmented as more edge clusters are added to your Anthos dashboard, which makes the clusters difficult to manage. Google Cloud offers fleets to help manage and govern them. Anthos Fleets eliminates the need for enterprises to build their own tooling to achieve the level of control they want: it provides an easy way to group and normalize clusters, as well as a simplified method to manage and administer them. Both Anthos edge and GKE clusters can use fleet-based management.
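In practice, a cluster joins a fleet by being registered as a fleet membership. A sketch with the gcloud CLI, where the membership name, location, and cluster name are placeholders, might look like:

```shell
# Illustrative fleet registration (names and locations are placeholders).
# Register an existing GKE cluster as a member of the project's fleet:
gcloud container fleet memberships register edge-store-042 \
    --gke-cluster=us-central1-a/edge-store-042 \
    --enable-workload-identity

# List every cluster currently in the fleet:
gcloud container fleet memberships list
```

Once registered, the cluster shows up alongside its peers in the Anthos dashboard and can receive fleet-level configuration and policy.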

Anthos Service Mesh

Microservices architectures thrive at the edge. Smaller, lighter services improve reliability, scalability, and fault tolerance, but they also introduce complexity in traffic management, telemetry, and security. Anthos Service Mesh (ASM), built on open-source Istio, provides a consistent framework for reliable and efficient service management. It offers service operators critical features such as monitoring, logging, and tracing; it enables zero-trust security implementations; and it lets operators control traffic flow between services. These are capabilities we've been imagining for years: virtualizing services separates networking from applications, and operations from development. Combined, ASM, ACM, and Policy Controller are powerful tools for streamlining service delivery and fostering agile practices without compromising security.
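To make the zero-trust point concrete, here is a minimal Istio-style policy of the kind ASM can enforce. This is a sketch, not a prescribed ASM setup; applying it in the root namespace makes it mesh-wide:

```yaml
# Illustrative mesh policy: enforce strict mutual TLS for every
# workload in the mesh, a building block of zero-trust security.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system     # root namespace, so the policy is mesh-wide
spec:
  mtls:
    mode: STRICT              # reject plaintext traffic between services
```

With this in place, every service-to-service call across an edge site is authenticated and encrypted by the mesh, with no changes to application code.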

Pushing the edge to reach the edge

Edge computing has been around for some time, but enterprises are only beginning to realize the benefits of the model. We've shown the remarkable speed and potential of edge technology throughout this series. Distributing work across asynchronously and intermittently connected networks of customer-managed commodity hardware and dedicated devices, which handle the grunt work for our cloud VPCs and data centers, opens up enormous opportunities for distributed processing.

Enterprises can take advantage of the edge by building installations that use private services and are resilient to network and hardware failures. Google Cloud provides a complete software stack for this: Kubernetes and GKE, Anthos Fleets, Anthos Service Mesh, Anthos Config Management, and Policy Controller. Together they allow platform operators to manage remote edge networks from faraway locations.
