Vertex AI Streaming Ingestion allows for real-time AI
Near-real-time predictions are required for many machine learning (ML) use cases, such as fraud detection, ad targeting, and recommendation engines. These predictions depend on access to the latest data, where even a few seconds of delay can make a big difference in performance. Yet it is not easy to build the infrastructure needed to support both high-throughput updates and low-latency retrieval.

Vertex AI Matching Engine and Feature Store now support real-time Streaming Ingestion. Matching Engine is a fully managed vector database for vector similarity search; with Streaming Ingestion, items in an index are updated continuously and reflected immediately in similarity search results. With Feature Store Streaming Ingestion, you can retrieve the most recent feature values and extract real-time data for training.

Digits has taken advantage of Vertex AI Matching Engine Streaming Ingestion to power its product, Boost, a tool that saves accountants time and automates manual quality control. Thanks to Matching Engine Streaming Ingestion, Digits Boost can now provide analysis and features in real time; prior to Matching Engine, transactions were classified on a 24-hour batch schedule.

“With Matching Engine Streaming Ingestion, we are able to perform near-real-time incremental indexing operations, such as inserting, updating, or deleting embeddings on existing indexes. This has helped us speed up our process. Now we can provide immediate feedback to our customers and handle more transactions more quickly,” stated Hannes Hapke, machine learning engineer at Digits.
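The index operations described here, inserting, updating, and deleting embeddings with the changes visible to the very next query, can be illustrated with a toy in-memory index. All names and vectors below are invented for illustration; a real deployment would call the Matching Engine API instead:

```python
import math

class StreamingVectorIndex:
    """Toy in-memory stand-in for a streaming vector index: upserts and
    deletes are reflected in the very next similarity query."""

    def __init__(self):
        self.vectors = {}  # id -> embedding

    def upsert(self, item_id, embedding):
        self.vectors[item_id] = embedding  # insert, or update in place

    def delete(self, item_id):
        self.vectors.pop(item_id, None)

    def search(self, query, k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        scored = sorted(self.vectors.items(),
                        key=lambda kv: cosine(query, kv[1]), reverse=True)
        return [item_id for item_id, _ in scored[:k]]

index = StreamingVectorIndex()
index.upsert("txn-1", [1.0, 0.0])
index.upsert("txn-2", [0.0, 1.0])
index.upsert("txn-1", [0.9, 0.1])    # an update is visible immediately
index.delete("txn-2")                # so is a deletion
print(index.search([1.0, 0.0], k=1))  # -> ['txn-1']
```

Matching Engine does this at massive scale with approximate nearest-neighbor search; the brute-force scan above only illustrates the streaming semantics.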

This blog post explains how these new features improve predictions and allow near-real-time use cases such as recommendations, content personalization and cybersecurity monitoring.

Streaming Ingestion enables real-time AI

Organizations are realizing the potential business benefits of predictive models that use up-to-date information, and more AI applications are being developed. Here are some examples.

Real-time recommendations for a marketplace: Mercari has added Streaming Ingestion to its existing Matching Engine product recommendations. This creates a real-time marketplace where users can search for products based on their interests and see new products reflected immediately as they are added. It feels like shopping at a farmer’s market in the morning, where fresh food is brought in while you shop.

Mercari can combine Matching Engine’s filtering capabilities with Streaming Ingestion to determine whether an item should appear in search results, based on tags like “online/offline” and “instock/nostock”.
Large-scale personalized streaming: You can create pub-sub channels for any stream of content represented with feature vectors, selecting the most valuable content according to each subscriber’s interests.

Matching Engine’s scalability (it can process millions of queries per second) means you can support millions of online subscribers to content streams. Because it is highly scalable, you can also serve a wide range of dynamically changing topics. Matching Engine’s filtering capabilities let you control what content is included by assigning tags like “explicit” or “spam” to each object.

Feature Store can be used as a central repository to store and serve the feature vectors of your content in near real time.

Monitoring: Content streams can be used to monitor events and signals from IT infrastructure, IoT devices, or manufacturing production lines. For example, you can extract signals from millions of sensors and devices and turn them into feature vectors.

With Matching Engine, you can keep a list such as “top 100 devices with defective signals” or “top 100 sensor events with outliers” updated in near real time.

Spam detection: If you index security threat signatures and spam activity patterns, Matching Engine can instantly identify potential attacks across millions of monitoring points. Threat identification that relies on batch processing can involve significant delays that leave the company more vulnerable; with real-time data, your models can detect threats and spam more quickly.

Implementing streaming use cases

Let’s look closer at some of these use cases.

Real-time recommendations for retailers
Mercari created a feature extraction pipeline using Streaming Ingestion.

To initiate the process, the feature extraction pipeline runs on Vertex AI Pipelines, invoked periodically by Cloud Scheduler or Cloud Functions.

Get item information: The pipeline issues a query to retrieve the latest item data from BigQuery.

Extract feature vectors: The pipeline runs predictions on the data with a word2vec model to extract feature vectors.

Update index: The pipeline calls the Matching Engine APIs to add the feature vectors to the vector index. The vectors are also saved to Bigtable (which may be replaced by Feature Store in the future).
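The feature extraction step can be sketched as follows. A common word2vec technique is to average the vectors of the words in an item’s title to get one feature vector per item; the tiny vocabulary below is invented for illustration, whereas a real pipeline would load a trained word2vec model:

```python
# Hypothetical word vectors; a real pipeline would load a trained word2vec model.
WORD_VECTORS = {
    "vintage": [0.9, 0.1],
    "camera":  [0.8, 0.3],
    "lens":    [0.7, 0.4],
}

def item_embedding(title, word_vectors=WORD_VECTORS, dim=2):
    """Average the word vectors of the title's known words -- a common
    word2vec trick for turning text into a single feature vector."""
    vecs = [word_vectors[w] for w in title.lower().split() if w in word_vectors]
    if not vecs:
        return [0.0] * dim  # no known words: fall back to a zero vector
    return [sum(col) / len(vecs) for col in zip(*vecs)]

emb = item_embedding("Vintage camera")
print([round(x, 3) for x in emb])  # -> [0.85, 0.2]
```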

“We were pleasantly surprised by the extremely short latency for index updates when we tested Matching Engine Streaming Ingestion,” stated Nogami Wakana, a software engineer at Souzoh (a Mercari Group company). “We would like to add this functionality to our production service as soon as it becomes GA.”

This architecture design is also applicable to retail businesses that require real-time product recommendations.

Ad targeting

Real-time features and up-to-date item matching are key to ad recommender systems. Let’s look at how Vertex AI can help you build an ad targeting system.

First, generate a list of candidates from the advertisement corpus. This is difficult because you need to generate relevant candidates in milliseconds. Vertex AI Matching Engine can generate relevant candidates by performing low-latency vector similarity matching, and Streaming Ingestion keeps your index up to date.

The next step is to rerank the candidates with a machine learning model to ensure the ads are in the right order. To make sure the model uses the most recent data, you can use Feature Store Streaming Ingestion to import the latest feature values and online serving to serve them at low latency, improving precision.
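A minimal sketch of this two-stage retrieve-then-rerank pattern follows. The ads, embeddings, and CTR features are all invented; in production, stage 1 would be Matching Engine and stage 2 a ranking model fed by Feature Store online serving:

```python
def retrieve(candidates, query_vec, k=3):
    """Stage 1: cheap similarity retrieval (stands in for Matching Engine)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sorted(candidates, key=lambda ad: dot(query_vec, ad["embedding"]),
                  reverse=True)[:k]

def rerank(shortlist, fresh_features):
    """Stage 2: rescore the shortlist with the latest per-ad features
    (stands in for a ranking model using Feature Store online serving)."""
    return sorted(shortlist,
                  key=lambda ad: fresh_features[ad["id"]]["ctr"], reverse=True)

ads = [
    {"id": "a", "embedding": [0.9, 0.1]},
    {"id": "b", "embedding": [0.8, 0.2]},
    {"id": "c", "embedding": [0.1, 0.9]},
]
features = {"a": {"ctr": 0.02}, "b": {"ctr": 0.05}, "c": {"ctr": 0.9}}

shortlist = retrieve(ads, [1.0, 0.0], k=2)  # similarity keeps "a" and "b"
ranked = rerank(shortlist, features)        # fresh CTR data puts "b" first
print([ad["id"] for ad in ranked])          # -> ['b', 'a']
```

Note that ad “c” has the highest CTR but never reaches the reranker: the quality of stage 1 retrieval bounds what stage 2 can do, which is why keeping the index fresh with Streaming Ingestion matters.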

Final optimizations can be applied after reranking the ad candidates. You can implement the optimization step using a

GKE Networking Basics

 

This article discusses the networking components of Google Kubernetes Engine (GKE) and explores the many options available. Kubernetes is an open-source platform for managing containerized workloads and services, and GKE is a fully managed environment for running Kubernetes on Google Cloud infrastructure.

IP addresses

Kubernetes uses IP addresses and ports for communication between various network components. IP addresses are unique addresses that identify different components of the network.

Components

  • Containers – Run the application processes. A pod can contain one or more containers.
  • Pods – A group of one or more containers that are deployed together. Pods are assigned to nodes.
  • Nodes – Worker machines in a cluster (a cluster is a collection of nodes). A node runs zero or more pods.

Services

  • ClusterIP – A virtual IP address assigned to a particular Service, reachable from within the cluster.
  • Load Balancer – Balances internal and external traffic to cluster nodes.
  • Ingress – A load balancer that handles HTTP(S) traffic.

IP addresses are assigned to components and services from subnets. Subnets are divided into CIDR blocks using variable-length subnet masks (VLSM), and the subnet mask determines how many hosts are available on a subnet.

Google Cloud uses 2^n − 4 (where n is the number of host bits) to calculate the available hosts in a subnet, which differs from the 2^n − 2 formula used for on-premises networks.
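A quick sketch of the difference. Google Cloud reserves four addresses per primary subnet range (the network address, the default gateway, the second-to-last address, and the broadcast address), versus the two reserved in the classic on-premises formula:

```python
def usable_hosts(prefix_len, reserved=4):
    """Usable addresses in an IPv4 subnet. Google Cloud reserves four
    addresses per primary range, so it uses 2**n - 4; traditional
    on-prem networks reserve two (2**n - 2)."""
    return 2 ** (32 - prefix_len) - reserved

print(usable_hosts(24))              # -> 252 (Google Cloud /24)
print(usable_hosts(24, reserved=2))  # -> 254 (classic on-prem /24)
```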

This is how the flow of IP address assignments looks:

  • The cluster’s VPC network assigns IP addresses to nodes.
  • Internal load balancer IP addresses are automatically assigned from the node IPv4 block. Alternatively, you can specify a reserved range for your load balancers and use the loadBalancerIP option to pick an address from it.
  • Pod addresses come from a range of addresses issued to the pods on that particular node. The default maximum is 110 pods per node; this number is multiplied by 2 to size the allocation (110 × 2 = 220), and the nearest subnet that fits is used, /24 by default. The doubling creates a buffer for scheduling pod churn. The limit can be set at cluster creation.
  • Containers share the IP address of the pod they run in.
  • Service (ClusterIP) addresses come from an address pool reserved for Services.
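The pod range sizing above can be sketched in a few lines, assuming the rule is “double the pod count, then pick the smallest subnet that fits”:

```python
import math

def pod_range_prefix(max_pods_per_node):
    """GKE doubles the node's maximum pod count to leave headroom for
    pod churn, then picks the smallest subnet that fits the result
    (the default of 110 pods needs 220 addresses -> /24)."""
    needed = max_pods_per_node * 2
    host_bits = math.ceil(math.log2(needed))
    return 32 - host_bits

print(pod_range_prefix(110))  # 220 addresses needed -> 24 (a /24)
print(pod_range_prefix(32))   # 64 addresses needed  -> 26 (a /26)
```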

You can see an example of how to plan and scope address ranges in the IP addresses for VPC native clusters section.

Domain Name System

DNS resolves names to IP addresses and allows services to automatically create name entries. GKE offers several options.

  • kube-dns – The Kubernetes-native add-on service. kube-dns runs as a Deployment that is accessible via a cluster IP, and is used by default for DNS queries by pods within a cluster. This document explains how it works.
  • Cloud DNS – Google Cloud’s managed DNS service, which can also manage your cluster DNS. Cloud DNS has some advantages over kube-dns:
    • It reduces the administration of a cluster-hosted DNS server.
    • It provides local DNS resolution for GKE nodes by caching responses locally, which allows for both speed and scalability.
    • It integrates with the Google Cloud operations monitoring suite.

Service Directory can also be integrated with GKE or Cloud DNS to manage services through namespaces.

The gke-networking-recipes GitHub repo has some Service Directory examples you can try out for internal load balancers, ClusterIP, Headless, and NodePort Services.

You can learn more about DNS options in GKE by reading the article DNS on GKE: Everything that you need to know.

Load Balancing

These devices control access and distribute traffic across cluster resources. GKE offers several options:

  • Internal Load Balancers
  • External load balancers

Ingress

Ingresses handle HTTP(S) traffic to your cluster using the Ingress resource type, which creates an HTTP(S) load balancer for GKE. To ensure that the address does not change, you can assign a static IP address to the load balancer when configuring it.

GKE allows you to provision both internal and external Ingress. These guides will show you how to configure GKE.

  • Ingress configuration for internal HTTP(S) load balancing
  • External load balancing

GKE also supports container-native load balancing, which directs traffic straight to pod IPs via network endpoint groups (NEGs).

Service routing

These are the main points you need to know about this topic.

  • Frontend – Exposes your service to clients and accepts traffic based on various rules. This could be a static IP address or a DNS name.
  • Load balancing – Once traffic is accepted, the load balancer routes the request to a backend according to its rules.
  • Backend – The endpoints that receive the traffic; GKE supports several backend types.

Operation

GKE offers several ways to design your cluster’s network.

  • Standard – Gives the administrator the ability to manage the cluster’s underlying infrastructure. Choose this mode if you want greater control and responsibility.
  • Autopilot – GKE provisions and manages the cluster’s underlying infrastructure. This configuration is ready to use and allows for largely hands-off management.
  • Private cluster – Allows only internal IP connections. Clients that need access to the internet (e.g. for updates) can be given it through Cloud NAT.
  • Private access – Lets your VPC connect to service producers via private IP addresses; Private Service Connect allows private consumption of services across VPC networks.

Bringing everything together

Here is a brief, high-level overview.

  • Your cluster assigns IP addresses to different resources
    • Nodes
    • Pods
    • Containers
    • Services
  • These IP addresses are reserved for various resource types. Subnetting allows you to adjust the size of the range to suit your needs. It is a good idea to limit external access to your cluster.
  • By default, pods can communicate with each other across the cluster.
  • A Service is required to expose applications running in pods.
  • Services are assigned cluster IPs.
  • You can either use kube-dns for DNS resolution or Google Cloud DNS within your GKE Cluster.
  • External and internal load balancers can both be used with your cluster to distribute traffic and expose applications.
  • Ingress handles HTTP(S) traffic using Google Cloud’s HTTP(S) load balancing service, and can be used to create internal or external configurations.

Jupiter evolving: Reflections on Google’s data center network transformation

Data center networks underpin modern warehouse-scale and cloud computing. Computing and storage have been transformed by the underlying guarantee of uniform, arbitrary communication among thousands of servers at 100–200 Gb/s of bandwidth with sub-100 µs latency.

This model has a simple but profound benefit: adding an incremental storage device or server to a higher-level service results in a proportional increase of service capacity and capabilities. Google’s Jupiter data center network technology allows for this type of scale-out capability to support foundational services such as Search, YouTube and Gmail.

We have spent the last eight years integrating wavelength division multiplexing (WDM) and optical circuit switching (OCS) into Jupiter. Despite decades of conventional wisdom to the contrary, OCS and our software-defined networking (SDN) architecture have enabled new capabilities: incremental network builds with heterogeneous technologies; higher performance and lower latency and power consumption; adaptation to real-time communication patterns; and zero-downtime upgrades.

Jupiter achieves all of this while improving flow completion by 10%, improving throughput, incurring 30% less cost, and delivering 50x less downtime than the best known alternatives. Our paper, “Jupiter Evolving: Transforming Google’s Datacenter Network via Optical Circuit Switches and Software-Defined Networking,” explains how we achieved this feat.

This is a brief overview of the project.

Evolving the Jupiter data center network

In 2015, we demonstrated how Jupiter data center networks scaled to more than 35,000 servers with 40 Gb/s server connectivity, supporting more than 1 Pb/s of aggregate bandwidth. Jupiter now supports more than 6 Pb/s of data center bandwidth. Three ideas enabled this unprecedented performance and scale:

  • Software-defined networking (SDN) – A logically centralized control plane to program and manage the thousands of switches in the data center network.
  • Clos topology – A non-blocking, multistage switching topology composed of smaller-radix switch chips that can scale to arbitrarily large networks.
  • Merchant switch silicon – Cost-effective, general-purpose Ethernet switching components for a converged data and storage network.

Jupiter’s architectural approach, based on these three pillars, supported a major shift in distributed systems architecture and set the standard for how the industry builds and manages data center networks.

Two main challenges remained for hyperscale data centers. First, data center networks must scale to the size of a building, with 40 MW or more of infrastructure. Second, the servers and storage devices in the building are constantly evolving, for example moving from 40 Gb/s to 100 Gb/s to 200 Gb/s, and now 400 Gb/s, native interconnects. The data center network must therefore adapt dynamically to keep up with the new elements that connect to it.

Clos topologies, as illustrated below, require a spine layer that uniformly supports all devices. Deploying a building-scale network thus meant deploying a very large spine layer running at the speed of the latest generation of devices. Because Clos topologies require all-to-all fanout from the aggregation blocks to the spine, adding to the spine incrementally would require rewiring the entire data center, and the only way to support faster devices would be to replace the entire spine layer. That is not feasible given the hundreds of racks housing switches and the tens of thousands of fiber pairs running across the building.

The ideal data center network would instead support heterogeneous network elements in a “pay-as-you-grow” model, adding elements as needed and supporting the latest technology incrementally. It would support the same scale-out model the network already supports for servers and storage, allowing incremental additions of network capacity with native interoperability and increased capacity for all devices.

Second, uniform building-scale bandwidth is a strength, but it becomes limiting once you consider that data center networks are multi-tenant and continuously subject to maintenance and localized faults. One data center network hosts hundreds of individual services, each with its own priority and its own sensitivity to bandwidth and latency variation. Serving web search results in real time might require bandwidth allocation and real-time latency guarantees, while a batch analytics job may have more flexible short-term bandwidth requirements. The data center network should therefore assign bandwidth and paths to services based on real-time communication patterns and application-aware optimization. And if 10% of network capacity must be temporarily removed for an upgrade, that 10% should not necessarily be spread evenly across all tenants, but apportioned according to individual application priority and requirements.

These remaining challenges were initially difficult to address: data center networks were designed around hierarchical topologies at large physical scale, and could not support dynamic adaptation or incremental heterogeneity. We broke this pattern by introducing optical circuit switching into the Jupiter architecture. An optical circuit switch (depicted below) dynamically maps an optical fiber input port to an output port through two sets of micro-electromechanical systems (MEMS) mirrors that rotate in two dimensions to create arbitrary port-to-port mappings.

The insight was that it was possible to create arbitrarily logical topologies in data center networks by inserting an OCS intermediation layer among data center packet switches, as shown below.
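Conceptually, an OCS is just a reconfigurable one-to-one mapping from input fiber ports to output ports, oblivious to the bits flowing through it. A toy model (pure illustration, with invented names, not a real hardware interface):

```python
class OpticalCircuitSwitch:
    """Toy model of an OCS: a reconfigurable one-to-one mapping from
    input fiber ports to output ports, oblivious to data rate, wavelength,
    and packet contents."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mapping = {}  # input port -> output port

    def configure(self, mapping):
        # Each MEMS mirror steers one input to one output, so the mapping
        # must be a permutation: no two inputs may share an output.
        assert len(set(mapping.values())) == len(mapping)
        self.mapping = dict(mapping)

    def cross_connect(self, in_port):
        return self.mapping[in_port]

ocs = OpticalCircuitSwitch(4)
ocs.configure({0: 2, 1: 3, 2: 0, 3: 1})  # initial logical topology
print(ocs.cross_connect(0))              # -> 2
ocs.configure({0: 1, 1: 0, 2: 3, 3: 2})  # reconfigured in place, no rewiring
print(ocs.cross_connect(0))              # -> 1
```

The point of the sketch is the last two lines: changing the logical topology is a software operation on the OCS layer, not a physical rewiring of the building.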

This required us to develop OCS devices and native WDM transceivers at levels of scale, manufacturability, and reliability that were previously unimaginable. While academic research had explored the advantages of optical switching, common wisdom held that OCS technology was not commercially viable. We designed and built the Apollo OCS over multiple years, and it is now the foundation of most of our data center networks.

OCS has one major advantage: it does not involve packet routing or header parsing. An OCS simply reflects light from one port to another with great precision and very little loss. The light is generated through electro-optical conversion at the WDM transceivers, which are already required to transmit data reliably and efficiently through data center buildings. The OCS thus becomes part of the building infrastructure: it supports any data rate or wavelength and doesn’t require upgrades even as the electrical infrastructure moves from 40 Gb/s to 100 Gb/s and 200 Gb/s transmission and encoding speeds, and beyond.

The OCS layer allowed us to eliminate the spine layer in our data center networks, connecting heterogeneous aggregation blocks in a direct mesh instead and moving beyond Clos topologies in the data center. We developed dynamic logical topologies that reflect both physical capacity and application communication patterns. Reconfiguring the logical connectivity of switches in our network is now standard procedure: we dynamically change the topology with no application-visible impact by draining links, reconfiguring routing software, and relying on our Orion software-defined networking control plane to seamlessly orchestrate thousands of dependent and independent operations.

A particular challenge was routing over direct mesh topologies with the robustness and performance required by our data centers. Clos topologies have the convenient side effect that, although there are many paths through the network, they all have the same length and link capacity, so oblivious packet distribution, or Valiant load balancing, delivers sufficient performance. In Jupiter, our SDN control plane instead implements dynamic traffic engineering, using techniques pioneered in Google’s WAN: we split traffic among multiple paths while monitoring link capacities, communication patterns, and individual application priorities.
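The difference can be sketched as a weighted split. ECMP-style load balancing assumes all paths are equal; over a heterogeneous mesh, a traffic-engineered split weights each path, here (as a deliberate simplification) by its capacity alone:

```python
def split_traffic(demand_gbps, path_capacities):
    """Weighted split: spread demand across unequal paths in proportion
    to each path's capacity (Gb/s). ECMP/Valiant load balancing instead
    assumes all paths are equal, which only holds in a Clos fabric."""
    total = sum(path_capacities.values())
    return {path: demand_gbps * cap / total
            for path, cap in path_capacities.items()}

# Hypothetical paths through three aggregation blocks of mixed generations.
paths = {"via-block-A": 400, "via-block-B": 200, "via-block-C": 200}
print(split_traffic(80, paths))
# -> {'via-block-A': 40.0, 'via-block-B': 20.0, 'via-block-C': 20.0}
```

Jupiter’s real traffic engineering also weighs communication patterns and application priority, not just static capacity.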

Together, these efforts to re-architect the Jupiter data center networks that power Google’s warehouse-scale computers introduced several industry firsts:

  • Optical circuit switches as the interoperability point for building large networks, seamlessly supporting heterogeneous technologies, upgrades, and service requirements.
  • Direct mesh topologies for better performance, lower latency, and lower power consumption.
  • Real-time topology and traffic engineering that adapt network connectivity and paths to match communication patterns and application priority, while monitoring and reporting on maintenance and failures.
  • Hitless network upgrades through localized addition and removal of capacity, eliminating the need for costly and tedious “all services out” upgrades.

Although the technology itself is impressive, the real goal is the performance, efficiency, and reliability that together enable the most demanding distributed services, such as those behind Google and Google Cloud. Our Jupiter network uses 40% less power, costs 30% less, and delivers 50x better reliability than the best known alternatives, while improving flow completion and throughput by 10%. We are proud to present the details of this technological feat at SIGCOMM today and look forward to discussing our findings with the community.

Google Cloud enables smarter and greener energy use

 

Energy bills are a rising expense, and consumers are facing difficult times. But the climate crisis hasn’t gone away, and sustainability remains a top priority for consumers and businesses. Homes account for 40% of the UK’s emissions, spanning electricity, heating, transport, and other energy-related activities. People often don’t have the time or resources to research and test the many ways to save energy while juggling multiple demands. At Kaluza, we have made it our mission to help people save money while reducing their household emissions.

Born out of OVO Energy in 2019, Kaluza is a software-as-a-service company that helps accelerate the shift to a zero-carbon world. Our Kaluza Energy Retail product allows energy companies to put their customers at the center of this transformation by giving them real-time insights that can help lower their bills. Kaluza Flex’s advanced algorithms charge millions of smart devices at the most affordable and sustainable times. Kaluza partners with some of the largest energy and OEM companies in the world, including AGL in Australia, Fiat, Nissan, and Chubu in Japan.

Using Google Cloud data to support our 2030 carbon goals

We want to prevent the production of 10 million tonnes of CO2 by 2030. We will achieve this by reaching 100 million energy users and cutting our energy retail clients’ cost to serve by 50%. And that’s only half of it: we also want to dramatically reduce our own emissions as we accelerate the energy transition for customers, and we are committed to being carbon neutral by 2030, even as the world rushes towards net zero.

However, we cannot reduce what we don’t measure, so we created an internal carbon footprint tool to track the effect of our cloud usage. Our technology stack spans multiple cloud estates, but emissions data for our Google Cloud applications is easy to obtain thanks to the Carbon Footprint solution.

We get half-hourly information about our electricity usage for every process we run on Google Cloud, which allows us to pinpoint the carbon emissions of each one. These insights helped us create Kaluza’s carbon footprint tool, where we combine data from all our cloud providers into more effective dashboards that have been invaluable for our data team.

Green development: reducing emissions by 97%

Our carbon emissions tool lets teams get down into the details of the data, identify what is driving their carbon footprint, and work out how to address it. This is where the fun begins, as better data can translate into real sustainability projects. We have launched two large-scale initiatives so far.

The first is Green Software Development. We created a Green Development Handbook of best practices and guides that software engineers and developers can use to make their software more sustainable. For example, we were able to combine a number of large BigQuery queries into one query run at a more convenient time and place, which resulted in a 97% decrease in emissions. That means we save 6 kg of CO2 every time we run this query, and it is just one of the many ways we are making a difference.

Improving cloud infrastructure efficiency

Our second major initiative concerns our cloud infrastructure. One of the most effective ways to reduce carbon emissions is to choose a cleaner cloud, or a cleaner region, in which to run workloads. Google Cloud provides carbon data for all regions, including the average hourly carbon-free energy percentage at the location and the carbon intensity of the local electricity grid.

By digging into the data we can find cloud waste and take corrective action. Many of our workloads must run every day, but they don’t all have to run at specific times, which opens up the possibility of optimization. We use data from Google Cloud to understand the state and performance of our workloads; combining this with the grid’s carbon intensity data, we can identify workloads to reschedule to lower-intensity times and have a positive effect on Kaluza’s emissions.
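The rescheduling idea can be sketched in a few lines. The forecast numbers below are invented; real intensity data would come from the grid operator or Google Cloud’s regional carbon data:

```python
def greenest_start(intensity_by_hour, allowed_hours):
    """Pick the start hour with the lowest forecast grid carbon
    intensity (gCO2/kWh) among the hours a deferrable workload may run."""
    return min(allowed_hours, key=lambda h: intensity_by_hour[h])

# Hypothetical daily forecast, gCO2/kWh, at three-hour resolution.
forecast = {0: 210, 3: 180, 6: 150, 9: 120, 12: 90, 15: 110, 18: 240, 21: 260}

# A batch job that may start at midnight, 06:00, noon, or 18:00
# gets scheduled at noon, when the grid is cleanest.
print(greenest_start(forecast, allowed_hours=[0, 6, 12, 18]))  # -> 12
```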

Data to empower people to make an impact

One thing unites many of our sustainability projects: They are bottom-up initiatives that were developed with and by our team. We have emissions data at our disposal so we organize hackathons and Green Development days to encourage action and test new ideas.

Our core mission is to make sustainability accessible and actionable for everyone. We’re now bringing the same idea to our teams. It has been encouraging to hear the feedback. One of our employees stated that he now understands the impact his role has on Kaluza’s sustainability and the future of the planet. Our company is putting sustainability at its core by giving our employees the ability to take climate action. We can also encourage our employees to create stronger solutions for carbon savings by showing them the direct effects of their work.

Making electric cars more sustainable by turning them into green power stations

Kaluza offers many opportunities to make a positive impact. One of our sustainability pillars is our internal pledge to reduce carbon emissions and pass the savings on to our energy retail clients. We are also using Google Cloud solutions for other exciting projects, such as the first and largest domestic vehicle-to-grid (V2G) technology deployment, led by OVO Energy and Nissan.

With V2G, drivers can charge their electric cars when there is plenty of renewable energy and sell it back to the grid when there isn’t enough. By analyzing grid and vehicle data with Google Cloud, we’re turning millions of cars into batteries, helping drivers earn hundreds of pounds per year while making the system more sustainable. In a market like California, this could help reduce peak grid stress by up to 40%.

Together, we can power the future of energy

Kaluza uses technology to simplify the energy transition for clients and customers, from homes to cars and everywhere in between. We are excited to continue working with Google Cloud to grow our business and provide new energy solutions. We are determined to be a leader in sustainability, and we have found a cloud vendor that shares our sustainability goals. Together, we are building a world where net zero is within everyone’s reach.

How CISOs must adapt their mental models to cloud security

Security leaders often go into the cloud with tools, practices, skills, and mental models built on-premises. This can lead to efficiency and cost problems, but it is possible to map those mental models to the cloud.

When trying to understand the differences between cloud and on-premises models, it helps to look at the types of threats each cybersecurity model is trying to detect or block.

On-premises threats traditionally focused on stealing data from corporate databases and file storage, resources best protected with layers of network, endpoint, and sometimes application security controls. The corporate “crown jewels” were neither accessible to the outside world via an API nor stored in publicly accessible storage buckets. Other threats aimed to disrupt operations or deploy malware for various purposes, including outright theft or holding data for ransom.

Some threats are specific to the cloud. Bad actors will always try to exploit the cloud’s ubiquitous nature. They scan IP addresses for open storage buckets and internet-exposed compute resources.

Gartner explains that cloud security requires major changes in strategy compared to protecting on-premises data centers. Processes, tools, and architectures for protecting critical cloud deployments must be developed using cloud-native methods. As you begin cloud adoption, it is important to understand which security responsibilities belong to your cloud service provider and which belong to your company; doing so makes you less vulnerable to attacks on cloud resources.

Cloud security transformations are a great way to prepare CISOs for the threats of today, tomorrow, and beyond, but they require more than a plan and a few projects. CISOs and cybersecurity team leaders need to build new mental models for thinking about security, which means translating existing security knowledge into cloud realities.

To set the stage for this discussion, let’s define “cloud native.” A cloud-native architecture is one that makes the most of the distributed, scalable, and flexible nature of the public cloud. Although the term implies being born in the cloud, we are not trying to be exclusive; a better term might be “cloud-focused,” meaning doing security “the cloud way.”

However we define it, cloud adoption is a way to maximize your focus on writing code, creating business value, and keeping your customers happy, while taking advantage of the cloud’s inherent properties, including security. Simply lifting and shifting your existing security tools and practices into the cloud risks transferring legacy errors, some predating the cloud by decades, into your future cloud environments.

Cloud native means removing layers of infrastructure such as networks, servers, security appliances, and operating systems, and using modern tools designed for cloud computing. Another way to look at it: you won’t have to worry about those layers as you build code, which makes your life easier. This is the key to success, and security will follow the DevOps and SRE revolutions in IT.

This thinking extends to cloud-native security, where some of your existing tools combine with solutions offered by cloud service providers, and you take advantage of cloud-native architecture to protect what you build in the cloud. We’ve already discussed how threats targeting on-premises infrastructure differ from those targeting the cloud. Here are some other important areas to reevaluate as you build a cloud security mental model.

Network security

Some companies treat the cloud like a rented data center, but many of the traditional network security methods that worked well on-premises for decades are not suitable for cloud computing.

Concepts like the demilitarized zone (DMZ) can be adapted for today’s cloud environments. A modern approach to the DMZ could use microsegmentation to control access for an identity within a given context. Ensuring that the right identity has access to the right resource in the right context gives you strong control, and even if you make a mistake, microsegmentation limits the blast radius of a breach.

Cloud-native organizations also encourage new approaches to enterprise network security, such as BeyondProd, which lets organizations focus on who and what has access to their services rather than where the requests originate.

Cloud adoption can have a profound impact on network security, but not all areas will change in the same manner.

Endpoint security

The concept of the security endpoint changes in the cloud. What is the endpoint for a virtual server? What about containers? Microservices? In the Software-as-a-Service model there is no endpoint for you to manage at all; users only need to be aware of what happens where along the cloud security path.

This mental model can help: think of an API as a type of endpoint. Cloud APIs can benefit from some of the security thinking developed for endpoints. Concepts such as access security, permissions, and privileged access transfer well; maintaining an endpoint operating system does not.

Insecure agents can pose a risk to their clients even when they are deployed automatically on virtual machines in a cloud environment. For example, a Microsoft Azure cross-tenant vulnerability highlighted an entirely new type of risk, one that many customers did not even know existed.

This is why, among the many endpoint security practices, some vanish (such as patching the operating system under SaaS or PaaS), others survive (such as the need to secure privileged access), and still others are transformed.

Detection and response

A move to the cloud changes both the threat landscape and the way you respond to it. On-premises detection technologies and methods can serve as a foundation, but on their own they won’t reduce risk in the way most cloud-first companies require.

The cloud offers the chance to rethink your security goals, including availability, reliability, confidentiality, and integrity.

The cloud is distributed, immutable, API-driven, and automatically scalable, and it focuses on the identity layer. Workloads are often ephemeral, created for a specific task. All of these factors affect how you handle cloud threat detection and call for new detection methods.

Cloud threat detection works best across six domains: API, managed services, network, identity, compute, and container infrastructure. Each domain has its own detection mechanisms, such as API access logs and network traffic captures.

Some approaches become less important (for example, network IDS on encrypted connections), others grow in importance (such as detecting access anomalies), and others transform (such as detecting threats from the provider backplane).

Data security

The cloud is changing the way we think about data security.

Cloud adoption puts you on the path to what Google calls “autonomic security,” in which security is integrated into all aspects of the data lifecycle and continuously improves. It makes the cloud easier to use, freeing users from a multitude of rules about who can do what, when, and with which data. It also helps you keep up with ever-changing cyberthreats and business changes, and lets you make business decisions faster.

As in other categories, certain data security methods lose importance or disappear (for example, manual data classification at cloud scale), some remain as important in the cloud as they were on-premises, and others transform (for example, pervasive encryption with secure key management).

Identity and access management

When it comes to identity and access management (IAM), your cloud environment is not the same as your data center. In the cloud, every person and every service has its own identity, and you want to be able to control what each of them can access.

IAM lets you centrally manage cloud resources with fine-grained access control. Administrators can grant permissions on specific resources, giving you complete control over, and visibility into, your cloud resources from one central place. Whether you have complex organizational structures, hundreds of workgroups, or multiple projects, IAM provides a single view of security policy across your entire organization.

Access management tools let you grant cloud access at fine-grained levels, far beyond the project level. You can also create access control policies based on attributes such as device security status, IP address, and resource type, ensuring that appropriate security controls are in effect whenever cloud resources are accessed.

This is where zero trust plays a strong role. Implicit trust in any single component of a complex, interconnected system can pose significant security risk; trust must instead be established through multiple mechanisms and continually verified. A zero-trust security framework protects cloud-native environments by requiring that all users be authenticated, authorized, and validated for security configuration and posture before being granted access to cloud-based apps and data.
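The access decision described here can be sketched in a few lines. This is a toy illustration of a context-aware, zero-trust style check; the attribute names, roles, and rules are invented, and real deployments rely on policy engines and continuously verified signals rather than a single function.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str          # who (or which service) is asking
    role: str              # role granted to that identity
    device_patched: bool   # device security posture
    ip_on_corp_network: bool

def allow(request: AccessRequest, resource_required_role: str) -> bool:
    """Grant access only when identity, role, and context all check out;
    never rely on network location alone."""
    has_role = request.role == resource_required_role
    healthy_context = request.device_patched  # posture check
    # Note: ip_on_corp_network is deliberately NOT sufficient by itself.
    return has_role and healthy_context

ok = allow(AccessRequest("alice", "billing-admin", True, False), "billing-admin")
denied = allow(AccessRequest("bob", "billing-admin", False, True), "billing-admin")
print(ok, denied)  # True False
```

Note that being on the corporate network grants nothing by itself; the design choice is to verify identity and device posture on every request.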

This means the IAM mental model from on-premises security generally survives, but many of the underlying technologies change dramatically, and IAM’s importance to security increases significantly.

Cloud security: Shared destiny for more trust

The cloud is more than just “someone else’s computer,” and trust is a crucial component of your relationship with a cloud service provider. Cloud providers often describe this as shared responsibility: they provide the infrastructure, but you remain responsible for many seemingly complex security tasks.

Google Cloud operates in a shared fate model, managing risk together with our customers. We see it as our responsibility to help customers deploy securely on our platform. Rather than drawing hard lines where our responsibility ends, we are there to help with best practices for migrating safely to, and operating in, a trusted cloud.

As a student, you can improve your data analysis skills

 

You may be a college student preparing to enter the job market.

Data analysis is the area I always come back to when I think about high-value areas to focus my technical skills on.

Why data analysis is important

In today’s technology-driven society, data in all its forms is increasingly valuable for the insights it provides, and every field is seeing an exponential increase in the amount of data generated. This is good news for students: you can learn data analysis to enhance your existing skills in any field, including marketing, computer science, and even music. No matter your background, the ability to manipulate, process, and analyze data will help you get ahead.

What should I consider when learning new skills?

Learning new skills and tools in tech on top of coursework, jobs, or internships can be daunting; trust me, it’s not easy. That’s why it’s important for students to be strategic and efficient in determining the best resources for learning.

When I learn new software or skills, there are some factors that I consider.

  • How much will this cost?
  • How much time will this take?
  • How relevant is this to my job prospects and career?

Price
Price is something I always have to think about: knowing how to manage your finances is essential, especially when you’re investing in your career.

Time
What about time? Time is also a cost, and students value it as much as money. We balance coursework, studying, work, family, extracurriculars, and career growth, so we look for skills that can be learned quickly, on our own, and on our own schedule.

Applicability
Finally, I want the skill or tool I’m learning to be relevant to my job search, so I can list it on my resume and be more appealing to the kinds of companies I’ll be applying to. This kind of self-study is essential for career advancement, which is why I look for opportunities to learn directly on industry-standard software and services.

Learning data analysis using Google Cloud

My internship at Google has given me ample opportunity to improve my data analysis skills with Google Cloud services. This blog post focuses on two of them: BigQuery and Data Studio.

What’s BigQuery?

BigQuery lets companies run analytics on large datasets in the cloud. It’s also a great place to learn and practice SQL, the language used for analyzing data. BigQuery’s getting-started process is easy and saves students a lot of time: instead of installing database software and sourcing data to load into tables, you can log in to the BigQuery sandbox and immediately begin writing SQL queries, or copy samples to analyze data provided by the Google Cloud Public Datasets program. You’ll soon see the difference for yourself.

What’s Data Studio?

Data Studio integrates with BigQuery, letting you visualize data in interactive, customizable tables, dashboards, and reports. You can use it to visualize the results of your SQL queries, and it’s also useful for sharing insights with non-technical audiences.

Because Data Studio is part of Google Cloud, you don’t need to export processed query results to another tool. A direct connection to BigQuery lets you visualize the data in place, which saves time and eliminates worries about file compatibility and size.

BigQuery and Data Studio are free to use within the Google Cloud Free Tier. The free tier lets you store a limited amount of data (if you wish to upload your own) and process a set number of queries per month. You can also create a free BigQuery sandbox, which requires no credit card and no setup fees.

So BigQuery and Data Studio are free to use; now let’s talk about their applicability. Both are used in many industries today for production workloads. Search for BigQuery or Data Studio on LinkedIn and you’ll see what I mean.

Get started with BigQuery and Data Studio

Let’s get down to business. To show you how easy these tools are to use, here’s a quick tutorial to get you started with BigQuery and Data Studio.

Let’s look at an example situation that BigQuery can solve.

Congratulations! You are a new intern, recently hired by Pistach.io. Pistach.io requires new employees to come in for training programs for the first few weeks, and you must show up on time. Pistach.io’s office is in New York City and there is no parking available nearby, but you know that New York City’s public bike program has been reinstituted, so you have decided to bikeshare to work.

To arrive at work on time, here are some key questions you’ll need to answer:

  • Which nearby stations have bikes you can use in the morning?
  • Is there a drop-off point close to the office?
  • Which stations are busiest?

These questions can be answered with a public dataset. BigQuery offers tons of datasets you can use at no cost; this example uses the New York Citi Bike dataset.

How to get set up

    1. First, create a BigQuery sandbox, the environment you’ll use to do your work. Follow these steps to set one up: https://cloud.google.com/bigquery/docs/sandbox.
    2. Go to the BigQuery page in the Google Cloud console.
    3. In the Explorer pane, click +Add Data > Pin a project > Enter project name.
    4. Type “bigquery-public-data” and click Pin. This project contains all the datasets available through the public datasets program.
    5. Expand the bigquery-public-data project to see the underlying datasets. Scroll down until you find “new_york_citibike”.
    6. Expand new_york_citibike to see the citibike_stations and citibike_trips tables. Highlight a table to see its schema and a preview of the data.
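To answer the busiest-stations question, you would run a GROUP BY query in the BigQuery editor. As a rough sketch you can try anywhere, the query below runs against a few invented sample rows using Python’s built-in sqlite3; in BigQuery you would run essentially the same SQL against the citibike_trips table in bigquery-public-data.new_york_citibike instead.

```python
import sqlite3

# Invented sample rows standing in for the real citibike_trips table.
trips = [
    ("W 52 St & 11 Ave", "2016-07-01 08:12:00"),
    ("W 52 St & 11 Ave", "2016-07-01 08:45:00"),
    ("E 47 St & Park Ave", "2016-07-01 09:02:00"),
    ("W 52 St & 11 Ave", "2016-07-02 08:30:00"),
    ("E 47 St & Park Ave", "2016-07-02 17:15:00"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citibike_trips (start_station_name TEXT, starttime TEXT)")
conn.executemany("INSERT INTO citibike_trips VALUES (?, ?)", trips)

# Which stations are busiest? Count trips per start station, most popular first.
busiest = conn.execute("""
    SELECT start_station_name, COUNT(*) AS num_trips
    FROM citibike_trips
    GROUP BY start_station_name
    ORDER BY num_trips DESC
""").fetchall()

for station, n in busiest:
    print(station, n)
```

On the real dataset you would also filter starttime to morning hours to answer the first question; the aggregation pattern stays the same.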

Visualize the results
One of BigQuery’s great features is its Data Studio integration, which lets you visualize your results with ease: just click the Explore Data button on the query results page. This will help you get a better understanding of the results of the query you ran.

If you’re interested in trying Data Studio for yourself, I suggest following this tutorial. It also covers bikeshare trips, but this time in Austin, Texas!

Next step

It’s that easy! Google Cloud is simple to learn and use, so you spend less time getting going and more time analyzing data and creating visualizations. It’s easy to see how these tools can benefit your professional and personal tech development. Tools like BigQuery offer many ways to build your data analysis skills and give your early career in data science a head start.

Use graphs for smarter AI with Neo4j and Google Cloud Vertex AI

 

This blog post shows how to combine two technologies: Google Cloud Vertex AI, an ML development platform, and Neo4j, a graph database. Combined, these technologies can be used to build and deploy graph-based machine learning models.

You can find the code underlying this blog post in a notebook.

Graphs are useful for data science.

Many business problems can be solved with graphs. Graphs are data structures that describe the relationships between data points just as well as the data points themselves.

One way to think about graphs is through nouns and verbs. The nouns, or nodes, are things like people, places, and objects. The verbs, or relationships, are what connect them: people know one another, and things are sent between them. These relationships hold a lot of power.

Graph data is often large and difficult to manage, and it can be nearly impossible to use in traditional machine learning tasks.

Google Cloud and Neo4j provide scalable, intelligent tools to make the most of graph data. Together, Neo4j Graph Data Science and Google Cloud Vertex AI make it easy to build AI models from graph data.

Dataset – PaySim Fraud Identification

Machine learning on graphs has many applications, and combating fraud in all its forms is a common one. Credit card companies identify fake transactions, insurers face false claims, and lenders watch out for stolen credentials.

Statistics and machine learning have been used to combat fraud for decades. One common method is to build a classification model on the individual characteristics of each payment and its participants. For example, a data scientist might train an XGBoost model to predict whether a transaction is fraudulent using the transaction amount, date and time, origin and target accounts, and resulting balances.

Fraudsters can evade these models. By channeling transactions through a network of accounts, they can bypass checks that look at only one transaction at a time. To be successful, a model must understand the relationships between fraudulent transactions, legitimate transactions, and the actors behind them.

Graph techniques are ideal for these kinds of problems. In this example, we’ll show how graphs apply to this scenario, then build an end-to-end pipeline for training a complete model with Neo4j and Vertex AI. We’re using the PaySim dataset from Kaggle, augmented with graph features.

Loading Data into Neo4j

First we need to load the data into Neo4j. This example uses AuraDS, which provides Neo4j and Neo4j Graph Data Science as a managed service on top of Google Cloud. You can sign up for a limited preview now.

AuraDS is an excellent way to get started on GCP because the service is fully managed. All we have to do to set up a PaySim database is click through a few screens and load the dump file.

Once the data is loaded, Neo4j offers many ways to explore it. For example, you can run queries through the Python API within a notebook.

Creating graph embeddings with Neo4j

Once you have explored the dataset, a common next step is to use the Neo4j Graph Data Science algorithms to create features that encode complex, high-dimensional graph data into values that tabular machine learning algorithms can use.

Many users begin with simple graph algorithms to identify patterns: weakly connected components to find disjoint groups of account holders who share common logins, Louvain methods to identify fraud rings laundering money, or PageRank to determine which accounts are most important. These techniques require that you already know the exact pattern you’re looking for.

Alternatively, Neo4j can generate graph embeddings. A graph embedding condenses the complex topological information in your graph into a fixed-length vector in which related nodes have proximal vectors. If graph topology is important, such as how fraudsters behave and whom they interact with, the embeddings will capture it.

Some techniques use the embeddings by themselves; for example, you can locate clusters visually with a t-SNE plot or compute raw similarity scores. The magic happens when you combine your embeddings with Google Cloud Vertex AI to train a supervised model.

In this example we create a 16-dimensional graph embedding using the Fast Random Projection (FastRP) algorithm. One neat feature is the nodeSelfInfluence parameter, which lets us tune how much each node itself, as opposed to its neighborhood, influences its embedding.
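To make the intuition concrete, here is a toy sketch of the idea behind FastRP (this is not the Neo4j GDS implementation): give each node a random vector, then repeatedly blend it with the average of its neighbors’ vectors so that related nodes end up with similar embeddings. The mini transaction graph and the self_influence value standing in for nodeSelfInfluence are invented for illustration.

```python
import random

DIM = 16
random.seed(42)

# Hypothetical mini transaction graph: account -> accounts it transacts with.
graph = {
    "alice": ["bob", "merchant1"],
    "bob": ["alice", "merchant1"],
    "merchant1": ["alice", "bob"],
    "mallory": ["mule"],
    "mule": ["mallory"],
}

# Step 1: random projection - every node starts with a random vector.
emb = {n: [random.gauss(0, 1) for _ in range(DIM)] for n in graph}

# Step 2: propagation - mix each node's vector with its neighborhood average.
self_influence = 0.5  # plays the role of GDS's nodeSelfInfluence parameter
for _ in range(2):
    new_emb = {}
    for node, nbrs in graph.items():
        avg = [sum(emb[n][i] for n in nbrs) / len(nbrs) for i in range(DIM)]
        new_emb[node] = [
            self_influence * emb[node][i] + (1 - self_influence) * avg[i]
            for i in range(DIM)
        ]
    emb = new_emb

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Accounts that transact together should land closer than unrelated accounts.
print(round(dist(emb["alice"], emb["bob"]), 3))
print(round(dist(emb["alice"], emb["mule"]), 3))
```

After two rounds of averaging, alice and bob (who share a triangle with merchant1) sit much closer together than alice and the unrelated mule account, which is exactly the property a downstream classifier exploits.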

Once the embedding calculation is complete, we can dump the embeddings into a pandas DataFrame, convert them to CSV, and push the CSV to a Google Cloud Storage bucket where Vertex AI can read it. These steps are described in the notebook.
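The export step can be sketched with the standard library alone. The embeddings, ids, and column names below are made up; the actual notebook builds a pandas DataFrame and uploads the CSV to Cloud Storage with the Google Cloud client libraries.

```python
import csv
import io

# Made-up embeddings keyed by transaction id (in the real pipeline these come
# out of Neo4j Graph Data Science; here they are 4-dimensional for brevity).
embeddings = {
    "tx1": [0.1, 0.2, 0.3, 0.4],
    "tx2": [0.5, 0.6, 0.7, 0.8],
}
labels = {"tx1": 0, "tx2": 1}  # 1 = known fraudulent transaction

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id"] + [f"emb_{i}" for i in range(4)] + ["is_fraud"])
for tx, vec in embeddings.items():
    writer.writerow([tx] + vec + [labels[tx]])

csv_text = buf.getvalue()
print(csv_text)
# In the real pipeline this CSV would be uploaded to a Cloud Storage bucket
# for Vertex AI to import as a tabular dataset.
```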

Vertex AI Machine Learning

Once we have encoded the graph dynamics into vectors, we can use tabular methods in Google Cloud’s Vertex AI to train a machine learning model.

We first pull the data from the storage bucket and use it to create a Vertex AI dataset. Once the dataset is created, we can train a model on it. The results are displayed in the notebook, and you can also log in to the GCP console to view them in the Vertex AI GUI.

The console views are great because they include ROC curves as well as the confusion matrix, both of which help in understanding how the model performs.

Vertex AI also offers useful tools for deploying the trained model. You can load the dataset into a Vertex AI Feature Store, deploy an endpoint, and then call that endpoint to compute new predictions. This, too, is covered in the notebook.

Future Work

While working on the notebook, we quickly realized how much more could be done in this field. Machine learning on graphs, especially compared with the well-studied methods for tabular data, is still a young field.

We would like to explore the following areas further:

Improved dataset – Privacy concerns make it difficult to share fraud datasets publicly, which is why we used PaySim, a synthetic dataset. Our investigation revealed that the generated dataset carries very little information; a real dataset would likely have more structure.

In future research we’d love to explore the graph of SEC EDGAR Form 4 transactions, the filings that disclose trades made by officers of public companies. We expect this graph to be quite interesting, since many of these people are officers of multiple companies. Workshops are planned for 2022 where participants can explore the data together using Vertex AI and Neo4j, and a loader that pulls this data into Google BigQuery is already available.

Boosting and embedding – Graph embeddings such as Fast Random Projection duplicate data, because subgraphs end up in every tabular datapoint. Boosting techniques such as XGBoost, which Vertex AI uses, also duplicate data to improve results. The models in this example therefore likely involve too much data duplication, and we might see better results with other machine learning methods such as neural networks.

Graph features – This example showed how to generate graph features automatically using embeddings, but new graph features can also be engineered manually. Combining the two approaches would likely yield richer features.

Retailers must “always be pivoting”: three steps to keep going

 

 

For years, retailers have been told to embrace new technologies, trends, and imperatives such as online shopping, mobile apps, and omnichannel. Retailers adopted many of these in an effort to grow and stabilize their businesses, only to soon realize there was always more to take on.

Then came the pandemic, rising social movements, and harsher weather. These disruptions were not all bad for retail, and some retailers were better prepared than others, but they revealed a universal truth: adaptability is the key to survival and growth.

The retail environment today presents both new and familiar challenges to specialty and department store merchants. Not long ago, 88% of purchases were made in a physical store; that figure is now closer to 59%, with the rest done online or through other omnichannel routes.

It can feel like ABP is the mantra of our times: Always Be Pivoting.

The key question isn’t how to maintain momentum and agility, but how to do so without draining your workforce, inventory, or profits. The pivot itself is now a given; what you do with it is what matters.

It is important to adapt quickly, and that means having the right technology in place to allow for seamless scaling.

Retailers must be able to leverage real-time insights and improve customer experiences quickly, both online and offline (not to mention the growing hybridization of AR and VR). They need to modernize their stores to create engaging customer and associate experiences, and they must improve operations to allow rapid scaling from full operations to digital-only offerings.

Google Cloud offers three essential innovations to help retailers reach these goals: demand forecasting that harnesses the power of data analytics and AI; enhanced product discovery to increase conversion across channels; and tools to help create the modern store experience.

These are just a few of the ways that we can help you pivot.

Pivot point 1 – Harnessing AI and data for demand forecasting using Vertex AI

When it comes to building organizational flexibility, one of the biggest challenges retailers face is managing inventory and supply chains.

The pandemic created a global supply chain crisis, and the resulting demand spikes and logistical problems have made it harder for retailers to assess demand and availability. Even in normal times, inventory mismanagement is a trillion-dollar problem, according to IHL Group: it costs $634 billion each year in lost sales, while overstocks cost $472 billion in revenue lost to markdowns.

Optimizing the supply chain also leads to higher profits. McKinsey estimates that a 10% to 20% improvement in retail supply chain forecasting accuracy can produce a 5% reduction in inventory costs and a 2% to 3% increase in revenue.

Some of the problems associated with demand forecasting are:

  • Low accuracy leads to excess inventory and missed sales, putting pressure on fragile supply chains.
  • The real drivers of product demand are left out because traditional methods struggle to model large datasets.
  • Accuracy is poor for new product launches and for products with low or intermittent demand.
  • Complex models can be difficult to understand, leading to poor product allocations and low returns on promotional investments.
  • Different departments may use different methods, leading to miscommunication and costly reconciliation mistakes.

AI-based demand forecasting addresses these problems. Vertex AI Forecast helps retailers maintain greater inventory flexibility by incorporating machine learning into existing systems. Machine-learning-based forecasting models such as Vertex AI Forecast can process large amounts of disparate data and drive analytics that automatically adjust to new information.

These machine learning models let retailers use not only historical sales data but also close-to-real-time signals such as marketing campaigns, web actions (like a customer clicking the “add to cart” button on a site), local weather forecasts, and more.
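To illustrate why such signals matter, here is a toy comparison (this is not Vertex AI Forecast; the model and all numbers are invented) between a naive moving-average forecast and one adjusted for a live promotional signal:

```python
def moving_average_forecast(history, window=3):
    """Baseline: forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def signal_adjusted_forecast(history, promo_lift=0.0, window=3):
    """Scale the baseline by an estimated lift from live signals,
    e.g. a marketing campaign expected to raise demand by 20%."""
    return moving_average_forecast(history, window) * (1 + promo_lift)

daily_units_sold = [100, 110, 90, 105, 95, 100]  # invented sales history

baseline = moving_average_forecast(daily_units_sold)
with_promo = signal_adjusted_forecast(daily_units_sold, promo_lift=0.2)

print(baseline)    # mean of the last 3 days
print(with_promo)  # baseline plus the promotional lift
```

A history-only forecast would plan inventory for the baseline figure and miss the promotion-driven spike; real systems learn such lifts from the data rather than taking them as a fixed input.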

Pivot point 2 – Enhanced product discovery via AI-powered search, and recommendations

If customers can’t find the product they need online or in-store, they will look elsewhere. It’s a simple statement, but one with profound implications.

Research by The Harris Poll and Google Cloud found that over a six-month period, 95% of consumers received search results that were not relevant to what they were searching for. 85% of consumers have rejected search results that were not relevant to their query, and 74% said they avoid websites where they have encountered search problems in the past.

Search abandonment, which happens when a customer searches for a product but doesn’t find it on the retailer’s site, costs retailers more than $300 billion annually. Our product discovery solutions help surface the right products to the right customers at the right time. These solutions include:

  • Vision Product Search brings an augmented-reality-style experience, like Google Lens, into a retailer’s mobile app. Shoppers and store associates can search for similar products using images they have taken or found online, and receive a ranked list of similar items.
  • Recommendations AI lets retailers deliver highly personalized recommendations across multiple channels.
  • Retail Search provides Google-quality search results for a retailer’s own website and mobile apps.

All three are powered by Google Cloud and leverage Google’s advanced understanding of user context and intent to provide a seamless shopping experience for every customer. Combined, these capabilities help retailers reduce search abandonment and increase conversions across all their digital properties.

Pivot point #3: Building a modern store

The store is no longer just a place to browse and buy. Stores must be flexible and able to adapt to changing conditions, and modern stores must serve multiple functions: mini-fulfillment and returns centers, recommendation engines, shopping destinations, fun places to work, and more.

Just as retail businesses had to embrace omnichannel, stores themselves are becoming omnichannel centers, combining the digital and the physical in one location. Retailers can use physical stores to deliver superior customer experiences, but doing so requires greater collaboration among store, digital, and tech infrastructure teams, building on the agile ways they already work together.

It’s all about making physical spaces more digitally enabled, and Google Cloud can help physical locations upgrade their infrastructure and modernize both customer-facing and internal applications.

It’s like a new OS release for your phone: the box is the same, but the user experience can be quite different. Extend that idea to a digitally enabled store, and teams can create new experiences simply by updating the store’s displays, interfaces, and tools, whether for sales displays, fulfillment, or employee engagement.

This approach can result in simpler customer experiences and better-equipped store associates. Cloud solutions can automate ordering, replenishment, and fulfillment for omnichannel orders.

In the store, customers can use tools similar to the ones that help them find personalized products online, letting them browse, explore, and even create a customized shopping experience.

Technology can also maximize the impact of store associates, providing them with expertise that drives value-added service and productivity while reducing overhead costs. And customers should enjoy frictionless checkout with secure, reliable transactions.

Google Cloud can help retailers transform

Retailers need modern tools to pivot and adapt to changing consumer demands. We believe every company can become a tech company, every decision can be data-driven, every store can be digital and physical at once, and every worker can be a tech worker.

Google Cloud helps retailers solve their most difficult problems. Our unique capabilities include managing large amounts of unstructured information and advanced AI/ML. Our products and solutions help retailers focus on what matters most: improving operations and capturing digital and omnichannel revenue.

Cloud Computing: 12 Benefits

 

Cloud computing has existed for nearly two decades, yet despite its many benefits, including cost savings and competitive advantages, a large number of businesses continue to operate without it. A study by International Data Group found that 69% of businesses already use cloud technology in some capacity, while 18% plan to adopt cloud computing solutions at some point. Meanwhile, Dell reports that companies that invest heavily in big data, cloud, and mobility enjoy a 53% higher rate of revenue growth than their competitors.

This data shows that a growing number of tech-savvy companies and industry leaders recognize the many benefits of cloud computing and are using the technology to run their businesses more efficiently, serve their customers better, and increase their overall profit margins.

All of this suggests that, given the direction the industry is heading, there has never been a better time to get your head in the cloud.

Cloud computing has gained popularity in recent years because the rapid growth in data use that has accompanied the transition into the digital 21st century has made it increasingly difficult for individuals and companies to keep all of their important programs and data on in-house servers. The solution has existed for nearly as long as the internet, but only recently has it gained widespread adoption among businesses.

Cloud computing works on a principle similar to that of a web-based email client: users can access all of the system’s features and files without keeping the bulk of the system on their own computers. In fact, most people already use cloud computing services without realizing it; Gmail, Google Drive, and TurboTax are all cloud-based applications.

Users send their data to a cloud-hosted server, which stores it for later access. These applications are useful for personal use, but even more so for businesses that need to access large quantities of data over an internet connection.

Employees can, for example, access customer information through cloud-based CRM software such as Salesforce from their smartphone or tablet, whether at the office or on the road, and can quickly share that information with authorized parties around the globe.

Still, some leaders remain hesitant to commit to cloud computing solutions for their organizations. So we’d like to take a few minutes to share 12 benefits of cloud computing with you.

  1. Cost Savings
  2. Security
  3. Flexibility
  4. Mobility
  5. Insight
  6. Greater Collaboration
  7. Quality Control
  8. Disaster Recovery
  9. Loss Prevention
  10. Automatic Software Updates
  11. Competitive Edge
  12. Sustainability

  1. Cost Savings: If you’re worried about the price tag of switching to cloud computing, you aren’t alone; 20% of organizations are concerned about the initial cost of implementing a cloud-based server. But those weighing the advantages and disadvantages of cloud computing need to look beyond the initial price and consider the ROI. Once you’re on the cloud, easy access to your company’s data will save time and money on project startups. And because most cloud computing services are pay as you go, you won’t end up paying for features you neither need nor want. If you don’t take advantage of a feature, you won’t have to spend money on it.
    Pay-as-you-go also applies to the data storage space needed to serve your clients and stakeholders: you get exactly as much space as you need and aren’t charged for any you don’t use. Together, these factors bring lower costs and higher returns. In a Bitglass survey, half of all CIOs and IT leaders reported that cloud-based applications produced cost savings in 2015.
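The pay-as-you-go arithmetic is simple enough to sketch in a few lines of Python. The rates below are hypothetical placeholders, not any provider’s actual pricing:

```python
# Hypothetical pay-as-you-go rates -- real providers publish their own pricing.
HOURLY_COMPUTE_RATE = 0.05   # dollars per server-hour (illustrative)
STORAGE_RATE_PER_GB = 0.02   # dollars per GB-month (illustrative)

def monthly_cost(compute_hours: float, storage_gb: float) -> float:
    """Bill only for the hours and storage actually consumed."""
    return compute_hours * HOURLY_COMPUTE_RATE + storage_gb * STORAGE_RATE_PER_GB

# A small team that ran a server for 200 hours and stored 50 GB this month:
print(f"${monthly_cost(200, 50):.2f}")  # prints $11.00
```

Scaling down to zero usage means a zero bill, which is the core contrast with paying up front for fixed capacity.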

  2. Security: Many organizations have security concerns when it comes to adopting a cloud computing solution. After all, if files, programs, and other data aren’t kept securely on-site, how can you know they are protected? What’s to stop a cybercriminal from accessing your data remotely? Actually, quite a lot.
    For one thing, a cloud host’s core business is carefully monitoring security, which is significantly more effective than an in-house system in which the organization must divide its efforts among a multitude of IT concerns, security being only one of them. And while most businesses don’t like to openly consider the possibility of internal data theft, a shockingly large percentage of data thefts are committed by employees. When this is the case, it can actually be much safer to keep sensitive information off-site. Of course, this is all rather abstract, so let’s look at some solid statistics.
    RapidScale claims that 94% of businesses saw an improvement in security after switching to the cloud, and 91% said the cloud makes compliance easier for them. Much of this increased security comes from encrypting data while it is transmitted over networks and stored in databases. Encryption makes your information far less accessible to hackers or anyone else not authorized to view it. As an added measure, most cloud-based services let you set different security settings for different users.
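    Encryption in transit is normally handled for you by TLS, but the underlying idea of keyed protection can be sketched with Python’s standard library: the sender tags data with an HMAC before upload, and the receiver rejects anything that doesn’t verify. This is a simplified illustration of tamper detection, not a production security scheme, and the key shown is a placeholder:

```python
import hashlib
import hmac

SECRET_KEY = b"placeholder-shared-secret"  # real systems use a managed key store

def tag(data: bytes) -> str:
    """Compute a keyed SHA-256 digest so tampering can be detected later."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, signature: str) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(tag(data), signature)

record = b"customer-id=42,balance=100"
signature = tag(record)

print(verify(record, signature))                         # True: data is intact
print(verify(b"customer-id=42,balance=999", signature))  # False: data was altered
```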
  3. Flexibility: Your business has only a finite amount of attention to divide among all of its responsibilities. If your current IT solutions force you to devote too much of it to computer and data-storage issues, you won’t be able to concentrate on satisfying customers and reaching business goals. Relying on an outside organization to handle your IT hosting and infrastructure frees up more of your time for the areas of your business that directly impact your bottom line.
    Cloud hosting offers more flexibility than hosting on a local server. Cloud-based services can provide extra bandwidth instantly and without the need for a costly (and complex) upgrade to your IT infrastructure. The increased freedom and flexibility of a cloud-based service can significantly improve the efficiency of your company. An InformationWeek survey found that 65% of respondents believed that “the ability to rapidly meet business needs” was the main reason a company should migrate to a cloud environment.
  4. Mobility: Cloud computing provides mobile access to corporate data via smartphones and other devices, which, considering more than 2.6 billion smartphones are in use worldwide today, is an excellent way to make sure no one is left out of the loop. This feature is great for staff with hectic schedules or those who live far from the office, letting them keep up with clients and coworkers instantly.
    For better work-life balance, the cloud can be used to provide easily accessible information to remote, freelance, and travel sales staff. It’s no surprise that companies with employee satisfaction as a priority are 24% more likely than others to increase cloud usage.

  5. Insight: As we move deeper into the digital age, it is becoming increasingly clear that the old adage “knowledge is power” has taken on the more modern form “data is money.” Hidden within the millions of bits of data surrounding your business transactions and processes are valuable, actionable nuggets just waiting to be identified and acted upon. Without the right cloud computing solution, though, sifting through all of that data to find those kernels can be very difficult.
    Cloud-based storage solutions often offer cloud analytics, which allows you to see your data from a bird’s eye view. You can track your data and create customized reports to analyze the information across the organization. These insights can help you increase efficiency and create action plans to achieve organizational goals. Sunny Delight, for example, was able to increase its profits by approximately $2 million per year and reduce $195,000 in personnel costs using cloud-based business insight.

  6. Greater Collaboration: If your company has more than two employees, you should make collaboration a top priority; after all, there isn’t much point to having a team if it can’t work like a team. Cloud computing makes collaboration easy and secure: team members can view and share information safely through a cloud-based platform, and many cloud services even offer social spaces that connect employees across the organization, increasing interest and engagement. Collaboration is possible without cloud computing, but it will never be as easy or as efficient.

  7. Quality Control: Poor-quality, inconsistent reporting is one of the biggest hindrances to a company’s success. In a cloud-based system, all documents are stored in one place and in a single format. With everyone accessing the same information, you maintain consistency in data, avoid human error, and keep a clear record of any revisions or updates. Managing information in silos, by contrast, can lead to employees accidentally saving different versions of documents, which results in confusion and diluted data.

  8. Disaster Recovery: Control is a key factor in the success of any business, but no matter how firm a grip your organization has on its own processes, some things will always be out of your hands. In today’s market, even small amounts of downtime can have a devastating effect: downtime in your services leads to lost productivity, lost revenue, and a damaged reputation.

    While there may be no way to prevent or even anticipate the disasters that could harm your company, there are things you can do to speed up your recovery. Cloud-based services provide fast data recovery for all kinds of emergency scenarios, from natural disasters to power outages. 20% of cloud users claim disaster recovery in four hours or less, while only 9% of non-cloud users could say the same, and 43% of respondents said they intend to invest in cloud-based disaster-recovery strategies.
  9. Loss Prevention: If your company isn’t investing in cloud computing, then all of your valuable data is inseparably tied to the office computers it resides in. This may not seem like a problem, but if the local hardware fails, you could permanently lose your data. Computer malfunctions happen for many reasons, from viruses and age-related hardware wear to simple user error, and machines can also be misplaced or stolen despite everyone’s best intentions.
    If you aren’t on the cloud, you risk losing all of that information for good. With a cloud-based server, however, everything you’ve uploaded remains safe and accessible from any computer with an internet connection.

  10. Automatic Software Updates: For those who have a lot to get done, there isn’t much more frustrating than waiting around for system updates to install. Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual, organization-wide update. This saves valuable IT staff time and money otherwise spent on outside IT consultation. PCWorld reports that half of cloud adopters cite needing fewer internal IT resources as a benefit of the cloud.

  11. Competitive Edge: While cloud computing is becoming more popular, some still prefer to keep everything local. That’s their choice, but it puts them at a distinct disadvantage against competitors who have the benefits of the cloud at their fingertips, and adopting a cloud-based solution before they do means you’ll be further along the learning curve by the time they catch up. A Verizon survey found that 77% of businesses believe cloud technology gives them a competitive advantage, and 16% consider that advantage significant.

  12. Sustainability: Given today’s environmental situation, it’s no longer enough for organizations to place a recycling bin in the break room; they need to demonstrate that they are doing their part. Real sustainability requires solutions that address wastefulness at every level of a business, and cloud hosting is more environmentally friendly, leaving a smaller carbon footprint.

    Cloud infrastructures support environmental proactivity by powering virtual services rather than physical products and hardware, cutting down on paper waste, improving energy efficiency, and allowing employees to access the system from anywhere with an internet connection. A Pike Research report forecasts that data center energy consumption will drop by 31% from 2010 to 2020 based on the adoption of cloud computing and other virtual data options.

 

Bluestacks Alternatives | Best 5 Bluestacks Alternatives In 2021

Do you want to run Android applications on your PC or laptop? All you need is an Android emulator installed on your desktop. Even though smartphones are cheap nowadays, plenty of people still prefer running Android apps on a PC. When we talk about Android emulators, the first name that comes to mind is Bluestacks: it was the first Android emulator, and for a long time the best one. Even so, many people are now looking for Bluestacks alternatives and replacements.

Anyone who uses Bluestacks will know that almost everything connected with it has become slow to the core. The main reasons people look for Bluestacks alternatives are:

  • It has become a memory hog
  • It is less stable
  • It offers fewer features.

If you’re looking for alternatives to Bluestacks, scroll down. In this article, we’ve put together a list of the 5 best Bluestacks alternatives.

Best Bluestacks Alternatives 


There are a lot of Android emulators on the market, but most of them are less stable than Bluestacks. Here is a list of the five best Bluestacks alternatives you can use in 2021.

  1. NoxPlayer
  2. LDPlayer
  3. MEmu Play
  4. Genymotion
  5. Remix OS Player

1/NoxPlayer

The first emulator on our list of the best Bluestacks alternatives is NoxPlayer. First, the minimum system requirements to run NoxPlayer smoothly:

  • Graphics card – 1 GB or more
  • Processor – at least 2.2 GHz recommended
  • Windows 10/7/8.1/8/XP/Mac
  • RAM – 2GB or more.

NoxPlayer has a user-friendly interface, offers a lot of features, and is more stable than Bluestacks. It provides gameplay optimizations, controller compatibility, and more, and you can even change the Android device’s build.prop. Rooting Bluestacks is a complex process, but on NoxPlayer you simply enable a toggle in the settings and you’re rooted. That’s why it is one of the best alternatives to Bluestacks.

NoxPlayer runs Android 7, so you can play mobile games on your PC with smoother, more stable gameplay, and it’s easy to get started. It also comes with Google Play installed, making it easy to download apps and games, and its built-in file explorer lets you install any APK file.

2/LDPlayer

The second Bluestacks alternative on our list is LDPlayer. First, the minimum requirements:

  • Intel or AMD CPU Processor x86 / x86_64
  • Windows XP/ 7/ 8/ 8.1/ 10
  • Windows DirectX 11 / Graphics driver with OpenGL 2.0
  • RAM – 2GB
  • Hardware Virtualization Technology enabled in BIOS

LDPlayer is an Android emulator that delivers high performance and is perfect for running apps and games on your Windows PC. LDPlayer focuses on running games smoothly, so it offers gamer-oriented features such as multi-instance support, keyboard/gamepad controls, and a script recorder.

Built on the Android 7.1.2 Nougat kernel, it delivers next-level performance and smoothly runs demanding games. It is also compatible with Intel-, AMD-, and Nvidia-powered Windows machines, and with Virtualization Technology enabled you can improve performance further from the Settings page. If gaming is your main priority, LDPlayer is the best alternative to Bluestacks.

3/MEmu Play – Best Bluestacks Alternatives

The next Bluestacks alternative on our list is MEmu Play, which offers good performance and stability. First, its minimum requirements:

  • Processor (Intel or AMD CPU) – 2 cores x86/x86_64 
  • WinXP / 7 / 8 / 10 
  • DirectX 11 / Graphics driver with OpenGL 2.0.
  • Intel VT-x/AMD-V in BIOS – Enable
  • RAM – 2GB / 4GB for x64 system

MEmu supports both AMD and Intel chipsets, a feature many Android emulators on the market, including Bluestacks, lack. MEmu Play can smoothly run multiple instances at the same time, and you can use it both for gaming and for running any Android app you want. MEmu is free to download, offers a lot of features at no cost, and supports up to Android 7.1 (x64). It is also better optimized for gaming than Bluestacks.

4/ Genymotion – Best Bluestacks Alternatives

Genymotion Minimum Requirements For Smooth Running:

  • RAM – 2GB
  • Free hard disk space – 8 GB.
  • 64 bit CPU with VT-x or AMD-V support.
  • Windows 7, 8, 8.1, 10 (32 bit and 64 bit)
  • Apple Mac OS X version 10.8 or above

Genymotion is a user-friendly Android emulator that gives all Android developers an easy way to test beta apps. You can also set the RAM and internal storage for each virtual device. Genymotion is best suited to Android developers and runs both on the desktop and in the cloud through a web browser, on any platform: Windows, macOS, or Linux. Powered by OpenGL 2.0 technology, it runs smoothly on everything from Android 2.3 to Nougat 7.0, and if you want the Google Play Store you can add it via the GApps package.

However, Genymotion is not a good choice for heavy games; you can’t install titles like PUBG or Call of Duty. But if you want to test apps across multiple Android environments, Genymotion is for you.

5/ Remix OS Player

Remix OS Player Requirements:

  • Ram – 4GB
  • Core i3 (Recommend Core i5 or Core i7).
  • Windows 7 (64-bit) or latest.
  • Internet access.
  • 8GB Storage.

Remix OS Player is based on Android 6.0 Marshmallow. If you don’t mind installing a new OS on your device, go with Remix OS Player. It has a ton of features, such as button mapping, native Google Play support, and manual settings for signal strength, network type, location, and battery, and it is more stable than Bluestacks. Remix OS Player is best suited to high-spec PCs and is one of the most up-to-date players on the market, but it also runs on low-spec machines. It is a capable multi-tasker, letting you run multiple applications smoothly at the same time.

Other Best Bluestacks Alternatives

Here are five more Bluestacks alternatives worth considering:

  1. KoPlayer
  2. Android-x86
  3. Andy Android Emulator
  4. YouWave Android Emulator
  5. Gameloop

Final Thoughts…

Do Emulators Need a Graphics Card?

Most emulators need a graphics card to function well, while some work fine without one; it depends on the kinds of apps and games you want to run. If you have a low-end PC, NoxPlayer or MEmu Play is your best bet.

Best Emulator For PUBG, Free Fire, or Call Of Duty?

LDPlayer is the most gaming-optimized general-purpose emulator, but for games like PUBG, Free Fire, and Call of Duty, Tencent’s Gameloop gives you a better experience than any other emulator.

Should You Enable Virtualization?

For better performance, you should enable hardware virtualization in your BIOS/UEFI. Once it is enabled, you will notice a clear improvement in emulator performance.
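On a Linux host you can confirm hardware virtualization support by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) CPU flags; on Windows, the Performance tab of Task Manager reports whether virtualization is enabled. A small Linux-flavored sketch of the flag check (the parsing helper is our own illustration):

```python
def has_virtualization(cpuinfo: str) -> bool:
    """Return True if the CPU flag list mentions vmx (Intel VT-x) or svm (AMD-V)."""
    flags = set()
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return bool(flags & {"vmx", "svm"})

if __name__ == "__main__":
    import os
    # On Linux, the kernel exposes CPU capabilities in /proc/cpuinfo.
    if os.path.exists("/proc/cpuinfo"):
        with open("/proc/cpuinfo") as f:
            print(has_virtualization(f.read()))
```

Note that the flag only shows that the CPU supports virtualization; the feature must still be enabled in the BIOS/UEFI before emulators can use it.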

Whether you’re a developer looking to test apps or a casual user looking to play some games, the list of Bluestacks alternatives above has an emulator to match your needs. Try one that suits your requirements and your PC’s specs, and let us know in the comments which Android emulator is your favorite.