
Cloud Blog

How Google Cloud is helping U.S. public sector agencies during the COVID-19 pandemic and beyond 3 Apr 2020, 5:00 pm

In light of the current situation with COVID-19, public sector agencies are turning to the cloud more than ever, particularly as people increasingly need access to information and government services. Today, we’re happy to share more on our efforts to help governments with their COVID-19 needs and also highlight a few recent agreements with state and local agencies. 

Supporting government agencies through COVID-19

As COVID-19 continues to evolve, we are working with governments at all levels to help them connect with people and identify ways that data-driven research can help counter this global pandemic. 

Around the world, we are working with governments on projects such as developing AI-based chat technology to help overtasked agencies respond more quickly to citizen requests; bolstering government websites that get critical information to the public with free content delivery network (known as “Cloud CDN”) and load-balancing services; providing services and tools to track the spread of the virus; and assisting schools with distance-learning programs. 

In the U.S., we’re working with state agencies like the Oklahoma State Department of Health on solutions for medical staff to engage remotely with at-risk people who may have been exposed to the coronavirus. Within 48 hours, the department deployed an app that allowed medical staff to follow up directly with people who reported symptoms and direct affected people to testing sites. We worked with our partner MTX Group to create the app and are now deploying it with governments in Florida, New York, and many other states so they can use our tools for insights into how the virus’s spread is affecting people and state healthcare systems.

In Colorado, Eagle County began using Google Maps Platform and G Suite to redesign its emergency operations center and improve the delivery of vital crisis information to Colorado residents. In addition, Georgia Department of Human Services Eligibility and Child Support workers are leveraging our technology to remotely access critical apps and data, and Georgia Department of Community Supervision is using Google Meet for teleconferencing. 

Finally, around the country, we’re also supporting remote learning for public schools and universities. For example, we rolled out the largest go-live event in G Suite history—1.3 million accounts for students in New York City—so they can continue their school year virtually at home. You can learn more about our work here.

Momentum with government agencies

New Chicago DOT website helps inform citizens and alleviate traffic
The Chicago Department of Transportation (CDOT) partnered with Google Cloud to develop a unique application that uses Google Maps and other technologies to ingest multiple data sources and display them on CDOT’s ChiStreetWork website. This site informs Chicagoans on everything from when and where special events are taking place, to how road repairs and construction projects are impacting traffic patterns for their daily commutes.

Using a Google Maps interface, residents and visitors can subscribe to a targeted area, like their neighborhoods or workplaces, and define what public works and event information they’d like to receive (and at what frequency). As a result, the CDOT has been able to cut down on calls made to its office, freeing up resources and providing greater transparency to its citizens.

“I would listen to the calls coming into the office, and citizens just weren’t aware of what was going on. Someone would have a block party and then water management would arrive to dig up the street,” said CDOT Deputy Commissioner Michael Simon. “ChiStreetWork is more user-friendly and coordinates all events and work projects. The new subscription feature makes it easy for residents to get the information they need, and it provides an unprecedented level of visibility.”

District of Columbia's Air National Guard transforms cybersecurity training in the cloud 
The District of Columbia's Air National Guard (DCANG) recently began using Google Cloud’s Compute Engine to keep its Airmen's cybersecurity skills razor sharp. For one weekend a month when Airmen meet for drills on-site, as well as during their personal time, the DCANG’s Communications Flight (CF) rents virtual machines on Google Cloud to run a simulation of the Cyberspace Vulnerability Assessment/Hunter Weapon (CVA/H) system, which is used to detect and prevent attacks on the technology that powers the Guard's F-16 Falcon fighter jets and C-40 VIP airlift systems.

Google Cloud technology enables Guardsmen to train on systems even when they're away from Joint Base Andrews, the Air National Guard's D.C.-area military facility. Instead of having to acquire more CVA/H systems at a cost of $300,000 apiece, the DCANG (and now other states following its lead) can rent CPUs and memory affordably and seamlessly, said Capt. Jason Yee, cyberspace operations officer for the 113th Wing of the Air National Guard. "At any point in time, half of my flight is offsite," he said. "Renting Google Compute Engine is the most affordable and convenient way to ensure my team's cyber skills remain as sharp as possible. This opportunity was possible thanks to CF commander Maj Jeramy Thigpen's innovative leadership as well as the knowledge and insight that the Google Cloud Federal team in the local area provided." 

A commitment to public sector

We’re continuing to ramp up our capabilities to better serve the public sector through new, dedicated sales and engineering teams in the United States and around the world. We’re also advancing our capabilities in security certifications, such as our recently announced FedRAMP High authorization. In addition, we’re pursuing a facility security clearance that will allow us to further assist government agencies in their digital transformation efforts. Read more about our work in the public sector here.

Connecting to Google Cloud: your networking options explained 3 Apr 2020, 4:00 pm

So, your organization recently decided to adopt Google Cloud. Now you just need to decide how you’re going to connect your applications to it... Public IP addresses, or VPN? Via an interconnect or through peering? If you go the interconnect route, should it be direct or through a partner? Likewise, for peering, should you go direct or through a carrier? When it comes to connecting to Google Cloud, there’s no lack of options. 

The answer to these questions, of course, lies in your applications and business requirements. Here on the Solutions Architecture team, we find that you can assess those requirements by answering three simple questions:

  1. Do any of your on-prem servers or user computers with private addressing need to connect to Google Cloud resources with private addressing? 

  2. Do the bandwidth and performance of your current connection to Google services meet your business requirements? 

  3. Do you already have, or are you willing to install and manage, access and routing equipment in one of Google’s point of presence (POP) locations?

Depending on your answers, Google Cloud provides a wide assortment of network connectivity options to meet your needs, using either public networks, peering, or interconnect technologies. Here’s the decision flowchart that walks you through each of the three questions, and the best associated GCP connectivity option.

Deciding how to connect to Google Cloud

Public network connectivity

By far the simplest option for connecting your environment to Google Cloud is to use a standard internet connection that you already have, assuming it meets your bandwidth needs. If so, you can connect to Google Cloud over the internet in two ways.    

A: Cloud VPN

If you need private-to-private connectivity (Yes on 1) and your internet connection meets your business requirements (Yes on 2), then building a Cloud VPN is your best bet. This configuration allows users to access private RFC1918 addresses on resources in your VPC from on-prem computers also using private RFC1918 addresses. This traffic flows through the VPN tunnel. High availability VPN offers the best SLA in the industry, with a guaranteed uptime of 99.99%.
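
As a rough sketch of what the setup involves (every name, region, and address below is a placeholder assumption, not a value from this post), you might create an HA VPN gateway, a peer gateway resource, a Cloud Router, and a tunnel with gcloud:

# Hypothetical resources; adjust names, network, region, and peer IP to your environment.
gcloud compute vpn-gateways create ha-vpn-gw --network=my-vpc --region=us-central1
gcloud compute external-vpn-gateways create on-prem-gw --interfaces=0=203.0.113.10
gcloud compute routers create vpn-router --network=my-vpc --region=us-central1 --asn=65010
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 --vpn-gateway=ha-vpn-gw \
    --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
    --interface=0 --router=vpn-router --ike-version=2 --shared-secret=[SHARED_SECRET]

You would then configure BGP sessions on the Cloud Router (or static routes) so that your on-prem and VPC RFC1918 prefixes are exchanged over the tunnel.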

A Cloud VPN connection setup between the example.com network and your VPC.

B: Public IP addresses

If you don’t need private access (No on 1) and your internet connection is meeting your business requirements (Yes on 2), then you can simply use public IP addresses to connect to Google services, including G Suite, Google APIs, and any Cloud resources you have deployed via their public IP address. Of course, regardless of the connectivity option you choose, it is a best practice to always encrypt your data at rest as well as in transit. You can also bring your own IP addresses to Google’s network across all regions to minimize downtime during migration and reduce your networking infrastructure cost. After you bring your own IPs, GCP advertises them globally to all peers.

Peering 

If you don’t need RFC1918-to-RFC1918 private address connectivity and your current connection to Google Cloud isn’t performing well, then peering may be your best connectivity option. Conceptually, peering gets your network as close as possible to Google Cloud public IP addresses. 

Peering has several technical requirements that your company must meet to be considered for the program. If your company meets the requirements, you will first need to register your interest to peer and then choose between one of two options. 

C: Direct Peering

Direct Peering is a good option if you already have a footprint in one of Google’s POPs—or you’re willing to lease co-location space and install and support routing equipment. In this configuration, you run BGP over a link to exchange network routes. All traffic destined to Google rides over this new link, while traffic to other sites on the internet rides your regular internet connection.

Direct Peering allows you to establish a direct peering connection between your business network and Google's edge network and exchange high-throughput cloud traffic.

D: Carrier Peering

If installing equipment isn’t an option or you would prefer to work with a service provider partner as an intermediary to peer with Google, then Carrier Peering is the way to go. In this configuration, you connect to Google via a new link to a partner carrier that is already connected to the Google network. You run BGP or use static routing over that link. All traffic destined to Google rides over this new link. Traffic to other sites on the internet rides your regular internet connection.

With carrier peering, traffic flows through an intermediary.

Interconnects

Interconnects are similar to peering in that the connections get your network as close as possible to the Google network. Interconnects are different from peering in that they give you connectivity using private address space into your Google VPC. If you need RFC1918-to-RFC1918 private address connectivity then you’ll need to provision either a dedicated or partner interconnect.  

E: Partner Interconnect

If you need private, high-performance connectivity to Google Cloud, but installing equipment isn’t an option—or you would prefer to work with a service provider partner as an intermediary—then we recommend Partner Interconnect. You can find Google Cloud connectivity partners at Cloud Pathfinder by Cloudscene.

Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider.

The Partner Interconnect option is similar to carrier peering in that you connect to a partner service provider that is directly connected to Google. But because this is an interconnect connection, you also are adding a virtual attachment circuit on top of the physical line to get you your required RFC1918-to-RFC1918 private address connectivity. All traffic destined to your Google VPC rides over this new link. Traffic to other sites on the internet rides your regular internet connection.

F: Dedicated Interconnect

Last but not least, there’s Dedicated Interconnect, which provides you with a private circuit direct to Google. This is a good option if you already have a footprint (or are willing to lease co-lo space and install and support routing equipment) in a Google POP. 

With Dedicated Interconnect, you install a link directly to Google by choosing a 10 Gbps or 100 Gbps pipe. In addition, you provision a virtual attachment circuit over the physical link. You run BGP or use static routing over that link to connect to your VPC. It is this attachment circuit that gives you the RFC1918-to-RFC1918 private address connectivity. All traffic destined to your Google Cloud VPC rides over this new link. Traffic to other sites on the internet rides your regular internet connection.
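
As a hedged sketch of the attachment step (the resource names and region below are placeholders, and an existing Cloud Router and provisioned interconnect are assumed), the gcloud command might look like:

# Provision a VLAN attachment (attachment circuit) on an existing Dedicated Interconnect.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect --router=my-router --region=us-central1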

Sanity check

Now that you’ve made a decision, it’s good to sanity-check it against some additional data. The following chart compares each of the six connectivity options against nine connection characteristics. You can use it as a high-level reference to understand your choice and compare it to the other options; the data points should give you confidence in the service level your chosen option provides.

Option comparison.

There are lots of different reasons to choose one connectivity option over another. For example, maybe Cloud VPN would meet your needs today, but your business is growing fast, and an interconnect is in order. Use this chart as a starting point and then reach out to your Google Cloud sales representative, who can discuss your concerns in more detail and can pull in network specialists and solution architects to help you make the right choice for your business.

Expanding at-home learning with 30 days of training at no cost 2 Apr 2020, 4:00 pm

As more and more people transition to remote work and learning in response to COVID-19, many are looking for ways to continue learning and building their skills while at home. To help, we’re offering our Google Cloud learning resources such as our extensive catalog of training courses, hands-on labs on Qwiklabs, and interactive webinars at no cost for 30 days[1], so you can gain hands-on cloud experience no matter where you are. 

On-demand training

There are more than 60 Google Cloud training courses available on-demand, and you can now take our most popular learning paths on Pluralsight or Coursera at no cost for 30 days. Our courses are designed around in-demand core cloud roles and skills such as cloud architecture, data engineering, and machine learning. They will prepare you to solve real-world problems, and get you started on your path to certification.

Hands-on labs

You can access self-paced labs at no charge on Qwiklabs for 30 days. There are labs available for all skill levels, and many are also offered in multiple languages. With hundreds of labs available on-demand, and new labs added every week, you can learn how to prototype an app, analyze weather patterns, build prediction models, and more—all at your own pace. 

If you’re new to Google Cloud, or unsure of how to use Qwiklabs, we recommend starting with our introductory-level quest, Google Cloud Essentials. The quest contains seven hands-on labs which teach you how to create a virtual machine on Google Compute Engine, deploy a containerized application with Kubernetes Engine, and set up network and HTTP load balancers. 

Webinars

You can also tune into our Cloud Study Jams webinars at no cost, where you can watch Googlers lead hands-on lab demonstrations and answer your questions live on a variety of topics, including machine learning, understanding GCP costs, and more. 

We hope our training resources offer another way to increase learning and personal enrichment in these unprecedented times. Ready to strengthen your cloud skills with Google Cloud training? Register here by April 30 to get started for free.


[1] Your 30-day access to Google Cloud training at no cost starts when you enroll in your courses. These offers are valid until April 30, 2020. After your 30 days, you will incur charges on Pluralsight and Coursera; for Qwiklabs, you will need to purchase credits to continue taking labs.

Powering up caching with Memorystore for Memcached 2 Apr 2020, 4:00 pm

In-memory data stores are a fundamental infrastructure for building scalable, high-performance applications. Whether it is building a highly responsive ecommerce website, creating multiplayer games with thousands of users, or doing real-time analysis on data pipelines with millions of events, an in-memory store helps provide low latency and scale for millions of transactions. Redis is a popular in-memory data store for use cases like session stores, gaming leaderboards, stream analytics, API rate limiting, threat detection, and more. Another in-memory data store, open source Memcached, continues to be a very popular choice as a caching layer for databases and is used for its speed and simplicity.

We’re announcing Memorystore for Memcached in beta, a fully managed, highly scalable service that’s compatible with the open source Memcached protocol. We launched Memorystore for Redis in 2018 to let you use the power of open source Redis easily without the burden of management. This announcement brings even more flexibility and choice for your caching layer. 

Highlights of Memorystore for Memcached

Memcached offers a simple but powerful in-memory key value store and is popular as a front-end cache for databases. Using Memcached as a front-end store not only provides an in-memory caching layer for faster query processing, but it can also help save costs by reducing the load on your back-end databases.

Using Memorystore for Memcached provides several important benefits:

  • Memorystore for Memcached is fully open source protocol compatible. If you are migrating applications using self-deployed Memcached or other cloud providers, you can simply migrate your application with zero code changes.

  • Memorystore for Memcached is fully managed. All the common tasks that you spend time on, like deployment, scaling, managing node configuration on the client, setting up monitoring, and patching, are all taken care of. You can focus on building your applications.

  • Right-sizing a cache is a common challenge with distributed caches. The scaling feature of Memorystore for Memcached, along with detailed open source Memcached monitoring metrics, allows you to scale your instance up and down easily to optimize for your cache-hit ratio and price. With Memorystore for Memcached, you can scale your cluster up to 5 TB per instance.

  • Auto-discovery protocol lets clients adapt to changes programmatically, making it easy to deal with changes to the number of nodes during scaling. This drastically reduces manageability overhead and code complexity.

  • You can monitor your Memorystore for Memcached instances with built-in dashboards in the Cloud Console and rich metrics in Cloud Monitoring.

Memorystore for Memcached can be accessed from applications running on Compute Engine, Google Kubernetes Engine (GKE), App Engine Flex, App Engine Standard, and Cloud Functions.


The beta launch is available in major regions across the U.S., Asia, and Europe, and will be available globally soon.

Getting started with Memorystore for Memcached

To get started with Memorystore for Memcached, check out the quick start guide. Sign up for a $300 credit to try Memorystore and the rest of Google Cloud. You can start with the smallest instance and, when you’re ready, easily scale up to serve performance-intensive applications.
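
For a feel of the workflow, here is a rough sketch of creating an instance from the command line (the instance name, region, and sizing below are placeholder assumptions):

# Create a small Memcached instance; node count and sizing can be scaled later.
gcloud beta memcache instances create my-cache \
    --region=us-central1 --node-count=3 --node-cpu=1 --node-memory=4GB

Enjoy your exploration of Google Cloud and Memorystore for Memcached.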

Filling the NCAA void: Using BigQuery to simulate March Madness 2 Apr 2020, 4:00 pm

As COVID-19 continues to have enormous impact around the world, we’ve focused on supporting customers and making available public data to help research efforts, among various other initiatives. Beyond the essential issues at hand, it’s been a truly strange time for sports fans, with virtually every league shut down across the globe. Even though sports may be non-essential, they are one of our greatest distractions and forms of entertainment.

In particular, the recent American sports calendar has been missing an annual tradition that excites millions: March Madness®. The moniker represents the exciting postseason of college basketball, with both men’s and women’s teams competing to be crowned champions in the annual NCAA® Tournaments. Along with watching these fun, high-stakes games, sports fans fill out brackets to predict who will win in each stage of the tournament.

In our third year as partners with the NCAA, we had planned for a lot of data analysis related to men’s and women’s basketball before the cancellation of all remaining conference tournaments and both NCAA tournaments on March 12. It took us a few days to process a world with no tournament selections, no brackets, no upsets, and no shining moments, but we used Google Cloud tools and our data science skills to make the best of the situation by simulating March Madness.

Simulation is a key tool in the data science toolkit for many forecasting problems. Using Monte Carlo methods, which rely on repeated random sampling from probability distributions, you can model real-world scenarios in science, engineering, finance, and of course, sports. In this post, we’ll demonstrate how to use BigQuery to set up, run, and explore tens of thousands of NCAA basketball bracket simulations. We hope the example code and explanation can serve as inspiration for your own analyses that could use similar techniques. (Or you can skip ahead to play around with thousands of simulated brackets right now on Data Studio.)

Predicting a virtual tournament

In the context of projecting any NCAA Tournament, the first piece necessary is a bracket, which includes which teams make the field and creates the structure for determining who could play whom in each tournament round. The NCAA basketball committees didn’t release 2020 brackets, but we felt pretty good about using the final “projected” brackets from well-known bracketologists as proxies, since games stopped only a couple days short of selections. Specifically, we used bracket projections from Joe Lunardi at ESPN and Jerry Palm at CBS for the men, and Charlie Creme at ESPN and Michelle Smith at the NCAA for the women. These take into account a lot of different factors related to selection, seeding, and bracketing, and are fairly representative of the type of fields we might have seen from the committees.

The next step was finding a way to get win probabilities for any given matchup in a tournament field—i.e., if Team X played Team Y, how likely is it that Team X would win? To estimate these, we used past NCAA Tournament games for training data and created a logistic regression model that took into account three factors for each matchup:

  • The difference between the teams’ seeds. 1-seeds are generally better than 2-seeds, which are better than 3-seeds, and so on, down to 16-seeds.

  • The difference between the teams’ pre-tournament schedule-adjusted net efficiency. Think of these as team performance-based power ratings similar to the popular KenPom or Sagarin ratings, also applied to women’s teams (this post has further details on the calculations).

  • Home-court advantage. This is applicable for early-round women’s games that are often held at a top seed’s home stadium; almost all men’s games are at “neutral” sites.

BigQuery enables us to prepare our data so that each of those predictors is aligned with the results from past games. Then, we used BigQuery ML to create a logistic regression model with minimal code and without having to move our data outside the warehouse. Separate models were created for men’s and women’s tournament games, using the factors mentioned above. The code for the women’s tournament game model is shown here:
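
(The post embedded this query as an image; below is a minimal sketch of what such a BigQuery ML model definition looks like, with hypothetical dataset, table, and column names.)

-- Train a logistic regression on historical games; the label is whether team1 won.
CREATE OR REPLACE MODEL `ncaa.womens_tourney_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['team1_win']) AS
SELECT
  team1_win,                                   -- 1 if team1 won the game, 0 otherwise
  team1_seed - team2_seed AS seed_diff,        -- difference between the teams' seeds
  team1_net_eff - team2_net_eff AS eff_diff,   -- schedule-adjusted net efficiency difference
  team1_home AS home_court                     -- 1 if team1 had home-court advantage
FROM `ncaa.womens_tourney_training_games`;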

Both models had solid accuracy and log loss metrics, with sensible weights on each of the factors. The models then had to be applied to all possible team matchups in the projected 2020 brackets, which were generated along with each team’s seed, adjusted net efficiency, and home-court advantage using BigQuery. Then, we generated predictions from our saved models with BigQuery ML, again with minimal code and from within the data warehouse, as shown here:
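
(Also shown as an image in the original; here is a sketch of scoring all possible matchups with the saved model, reusing the same hypothetical names.)

-- Score every possible matchup; the output includes a predicted probability per label.
SELECT *
FROM ML.PREDICT(
  MODEL `ncaa.womens_tourney_model`,
  (SELECT
     team1, team2,
     team1_seed - team2_seed AS seed_diff,
     team1_net_eff - team2_net_eff AS eff_diff,
     team1_home AS home_court
   FROM `ncaa.womens_possible_matchups_2020`));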

The resulting table contains win probabilities for every potential tournament matchup, and sets us up for the real payoff: using the bracket structure to calculate each team’s probability of advancing to each round. For first-round games, where the matchups are already set—e.g., 1-seed South Carolina facing 16-seed Jackson State in Charlie Creme’s bracket—this is simply a lookup of the predicted win probability in the table. But in later rounds, there’s more to consider: the probability that the team gets there at all and, if it does, the fact that there is more than one possible opponent. For example, a 1-seed could face either the 8- or 9-seed in the Round of 32, the 4-, 5-, 12-, or 13-seed in the Sweet 16, and so on.

So, a team’s chance of advancing out of a given round is the chance they get to that round in the first place, multiplied by a weighted average of win probabilities—their chances of beating each possible opponent they might face, weighted by how likely they are to face them. Consider the example of an 8-seed advancing to the Sweet 16:

  • They are usually something like 50-50 to beat the 9-seed in the Round of 64

  • They are likely a sizable underdog in a potential matchup against a 1-seed

  • They likely have a very good chance of beating the 16-seed if they play them

  • But the 1-seed is the much more likely opponent in the Round of 32, so the lower matchup win probability gets weighted much higher in the advance calculation

Putting it all together, an 8-seed’s projected chance of making the Sweet 16 is usually well below 20%, since they have a (very likely) uphill battle against a top seed to get there.

Running this type of calculation for the entire bracket is naturally iterative. First, we use matchup win probabilities for all possible matchups in a given round to calculate the chances of all teams making it to the next round. Then, we use those chances as weights for each team and possible opponent’s likelihood of meeting in that next round, then repeat the first step using matchup win probabilities for the possible matchups in that round.

Doing this for all tournament rounds would typically be done in tools like Python or R, which requires moving data out of BigQuery, doing the calculations in one of those languages, and then perhaps writing results back to the database. But this particular problem is a great use case for BigQuery scripting, a feature that lets you send multiple statements in one request, using variables and control statements (such as loops). This provides functionality similar to iterative scripts in Python or R, while still using SQL and without having to leave the warehouse. In this case, as shown below, we use a WHILE loop to cycle through each tournament round, outputting each team’s advance probabilities to a table that gets referenced back in the script (“[...]” represents code left out for clarity):
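
(The post showed this script as an image; the sketch below is a hypothetical reconstruction with placeholder dataset and table names, keeping the post’s “[...]” convention for the omitted calculation logic.)

DECLARE current_round INT64 DEFAULT 2;

-- Round 1 advancement chances are just the first-round matchup win probabilities.
CREATE OR REPLACE TABLE `ncaa.advance_probs` AS
SELECT team_id, 1 AS round, win_prob AS advance_prob
FROM `ncaa.matchup_win_probs`
WHERE round = 1;

WHILE current_round <= 6 DO
  -- Chance of reaching this round times the weighted average of matchup win probs: [...]
  INSERT INTO `ncaa.advance_probs`
  SELECT team_id, current_round AS round, [...] AS advance_prob
  FROM `ncaa.advance_probs` JOIN `ncaa.matchup_win_probs` ON [...];
  SET current_round = current_round + 1;
END WHILE;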

We collected the results and put them into this interactive Data Studio report, which lets you filter and sort every tournament team’s chances (in each projected bracket). Our results show Kansas would’ve been the title favorite in the men’s bracket, with around a 15% to 16% chance to win it all. Oregon was the most likely women’s champion at either 27% or 31% (depending on the projected bracket chosen). Keep in mind that this is NOT saying Kansas or Oregon was going to win—the probabilistic forecasts actually show a 5-in-6 chance of a champion other than the Jayhawks on the men’s side and a greater than 2-in-3 chance of the Ducks not winning the women’s title.

While fun to play around with, these results are not particularly unique. Companies like ESPN, FiveThirtyEight, and TeamRankings have provided probabilistic NCAA Tournament forecasts for years. The probabilities are fairly accurate gauges of each specific team’s chances, but filling out a bracket using the most likely team in each slot ends up looking very chalky—the better seeds almost always advance. “Real” March Madness isn’t exactly like this—it’s only one tournament with 63 slots on the bracket that get filled in with a specific winner. While top seeds and better teams generally advance in aggregate, there are always upsets, Cinderella runs, and unexpected results. 

Simulating thousands of NCAA Tournaments

Fortunately, our procedure for the model and projections accounts for that randomness. To demonstrate this, we can simulate the bracket many times and look at the results. The procedure is similar to the one we used to create the projections, using BigQuery scripting and the matchup win probabilities to loop round by round through the tournament. The differences are that we use random number generation to simulate a winner for each matchup (based on the win probability), and that we do so across many simulations to generate not just one possible bracket but thousands of them—true Monte Carlo simulations. See the code below for details (again, “[...]” is used as a placeholder for code removed to simplify presentation):
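
(As above, this is a hypothetical sketch with placeholder names; “[...]” again marks elided logic.)

DECLARE current_round INT64 DEFAULT 2;

-- Simulate every first-round game 10,000 times: one random winner drawn per simulation.
CREATE OR REPLACE TABLE `ncaa.sim_winners` AS
SELECT
  sim_id, 1 AS round,
  IF(RAND() < team1_win_prob, team1, team2) AS winner
FROM `ncaa.matchup_win_probs`, UNNEST(GENERATE_ARRAY(1, 10000)) AS sim_id
WHERE round = 1;

WHILE current_round <= 6 DO
  -- Pair each simulation's surviving teams by bracket slot, look up the matchup
  -- win probability, and draw a random winner for each simulated game: [...]
  SET current_round = current_round + 1;
END WHILE;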

Let this run for a few minutes and we wind up with not just one completed NCAA Tournament bracket per gender, but 20,000 brackets each for men and women (10,000 for each projected bracket we started with). We’ve made all of these brackets available in this interactive Data Studio dashboard, accelerated using BigQuery BI Engine. Use “Pick A Sim #” to flip through many of them, and use the dropdowns up top to filter by gender or starting bracket. Within the bracket, the percentage next to each team is the probability of them making it to that round, given the specific matchup in the previous round (blue represents an expected result, red an upset, and yellow a more 50/50 outcome). You can use “Thru Round” to mimic progressing through each round of the tournament, one at a time.


Feel free to go through a few (dozen, hundred, …) simulations until you find the one you like the best...there are some wild ones in there. Check out Men’s Lunardi bracket simulation 108, where Boston University (the author’s alma mater) pulls three upsets and makes the Elite Eight as a 16-seed!


Perhaps one upside of having no tournaments is that we can pick a favorable simulation and convince ourselves that if the tournament had taken place, this is how it would've turned out!

Of course, these brackets aren’t just based on random coin flipping, where total chaos brackets are as likely as more plausible ones with fewer upsets. BU doesn’t get to the Final Four in any simulated bracket (though we could use the easy scalability of BigQuery to run more simulations), while the top seeds get there much more often. The simulations reflect accurate advancement chances for each matchup based on the modeling described above, so the resulting corpus of brackets reflect the proper amount of madness that typifies college basketball in March. Capturing the randomness appropriately is a good general point to keep in mind when creating these types of simulations to help solve non-basketball data science problems.

With the lack of actual national semifinals and title games going on over the next couple days, we hope the ability to play with thousands of simulated Final Fours provides some small bit of consolation to those of you missing the NCAA basketball tournaments in 2020. And you can check out our Medium NCAA blog for all of our past basketball data analysis using Google Cloud. Here’s to hoping that we’ll be watching and celebrating the real March Madness in future years.

Building your first Google Hangouts Chatbot in Apps Script 1 Apr 2020, 4:00 pm

Learning to build chatbots, with all the available approaches and technologies, can seem daunting. Similarly, building Google Hangouts chatbots can require some early decisions on server architectures, technical implementations, and even programming languages. You could, for example, build Google Hangouts chatbots using a variety of different technologies including Cloud Functions, HTTP web services, Cloud Pub/Sub, and Webhooks, to name a few. 

Fortunately for those who are in the early stages of learning to build bots for Google Hangouts, the “low code” Google Apps Script environment provides an easy path to get started. Also, because Apps Script offers native G Suite integration (including authentication), it can be the most pragmatic choice for building G Suite-centric chatbots on Google Hangouts.

Here’s a step-by-step guide on how to build your first Google Hangouts chatbot using Apps Script.

What is Apps Script?

Apps Script is a cloud-based scripting language and runtime environment based on JavaScript. It offers direct code access to a variety of Google products and APIs via its extensive library of services.

Apps Script is typically used to enhance the functionality of G Suite products (Google Sheets, Docs, Slides, Drive, and Gmail) by offering a streamlined “low code” approach. For example, to send an email in Apps Script, you can simply use:

MailApp.sendEmail("recipient@acme.com", "My Subject", "Hello from Apps Script!");

Apps Script code development can also be done entirely in a browser, so there’s no need to install a local software development environment (for more on Apps Script, check out this Apps Script Overview).

Pre-reqs

Before diving into this guide, please make sure you meet the following prerequisites:

  • You can create projects on Google Cloud Platform (GCP).

  • You’ve got a basic familiarity with Apps Script and/or JavaScript.

  • You’ve got G Suite Admin authority (you’ll need this for domain-wide publishing; for the simple testing described in this walkthrough, however, it isn’t needed).

Now that you’re ready to get started, here are the basic steps to build and test your own chatbot in Apps Script.

Step 1: Create and configure your Apps Script project

Go to script.google.com, click the “Create Project” button, name your project “Hello ChatBot”, click Save.

To configure the project for Chat:

  • Access the Apps Script project’s Manifest file. View -> Show Manifest File
  • This opens the appsscript.json manifest file in the script editor. You can now customize this file.
  • To add chat capabilities to this project, simply add "chat": { } to the manifest. For example:
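
(The post showed this as a screenshot; here is a minimal appsscript.json sketch. Your existing manifest fields, such as timeZone, may differ; the only required addition is the "chat" entry.)

{
  "timeZone": "America/New_York",
  "exceptionLogging": "STACKDRIVER",
  "chat": {}
}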

  • Save the project.
  • In the Code.gs file, add the following function to handle the incoming chat message:
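
(Also a screenshot in the original; a minimal handler along these lines echoes a greeting back to the sender.)

function onMessage(event) {
  // event.message.text holds the incoming message; event.user.displayName is the sender.
  return { "text": "Hello, " + event.user.displayName + "! You said: " + event.message.text };
}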

  • Save the project.

That’s all the Apps Script code needed to run the chatbot! 

Step 2: Save your deployment ID 

Next, you’ll associate the bot’s Apps Script project with a GCP project. Start by getting the Apps Script project’s deployment ID and saving it for later, when you configure the GCP project.

To get a deployment ID from the Manifest:

  • Publish > Deploy from manifest.
  • In the Deployments dialog box next to Latest Version (HEAD), click Get ID.
  • In the Deployment ID dialog box, copy the value listed for the deployment ID. It will look something like: AKfycbyS1N2v-_ZVNldWGBuY4azxodbOE06PTJGXKBU9hV3m
  • Copy and save the Deployment ID for later.
  • Click Close and Close to dismiss the dialog boxes.

Step 3: Create and configure your GCP Project 

Here, you’ll create and configure your GCP project so that it can serve as the chatbot backend.

To create a new GCP project: 

  • Go to https://console.cloud.google.com and create a “New Project” (you can name it “My First Chatbot”).

  • Select your associated Billing account.

  • Accept the defaults for Organization and Location.

  • Click CREATE, and then select the new project in the console.

Next, you need to enable the Chat API in the project:

  • From the main GCP Dashboard, click “Go to APIs overview” to open the “APIs & Services” Dashboard.

  • Click + ENABLE APIS AND SERVICES.

  • Search for “Hangouts Chat API”. 
  • Once located, click ENABLE.

  • Once enabled, click Credentials on the left side.
  • On the Credentials screen, click + CREATE CREDENTIALS and select Service account.
  • For your service account name, enter: “My First ChatBot Service Account.” An account ID will be generated automatically (my-first-chat-bot-service-accou).
  • You can also add a service account description, but this is optional.
  • Click CREATE.
  • After the account is created, look for the “Service account permissions” section, and, in the Role dropdown menu, select Project -> Owner.

  • Click CONTINUE and DONE.

Finally, you’ll need to configure the Chat API. Click Configuration in the menu on the left, and add the following options:

  • Bot name: enter “My First Bot”.

  • Avatar URL:
    https://www.gstatic.com/images/branding/product/1x/chat_48dp.png
    (You can provide your own publicly accessible image here if you want)

  • Description: “A simple first Hello Bot”.

  • Functionality: select Bot works in direct messages.

  • Connection settings: select Apps Script project and paste the Deployment ID from the Apps Script project into the field.

Under Permissions, select Specific people and groups in your domain, and enter your own email (from within the G Suite domain). Then click SAVE. Note that the SAVE button will remain active after saving, but the Bot status at the top of the page will change to “LIVE - available to users.”


Step 4: Test your new bot!

  • In a new browser window, open chat.google.com as the same domain user you specified as a chatbot user in the previous step.

  • Next to BOTS, click + to add your new bot. Search for your new bot.

  • Once located, click Add (and Message) to start a chat session with the new bot.

  • Enter “Hello!” and see the response!

  • Bonus step! Now that your bot is working, go back to the Apps Script code and add a change to the code so that it translates the message to French. 
    Hint, use: LanguageApp.translate(event.message.text,'en','fr')

  • As you save the project, you’ll notice that you can immediately test the bot with your latest code.


Coming up: More chatbot examples

In the near future we’ll be adding more posts with interesting examples of what you can do with chatbots, such as linking them to APIs and services, and even tapping into Google’s AI and machine learning platform. In the meantime, check out some examples of bots that are built in to Hangouts Chat in this recent blog post.

Achieving identity and access governance on Google Cloud 1 Apr 2020, 4:00 pm

When businesses shift from solely on-premises deployments to using cloud-based services, identity management can become more complex. This is especially true when it comes to hybrid and multi-cloud identity management.

Cloud Identity and Access Management (IAM) offers several ways to manage identities and roles in Google Cloud. One particularly important identity management task is identity and access governance (IAG): ensuring that your identity and access permissions are managed effectively, securely, and correctly. A major step in achieving IAG is designing an architecture that suits your business needs and also allows you to satisfy your compliance requirements. To manage the entire enterprise identity lifecycle, you must consider the following core tasks: 

  • User provisioning and de-provisioning
  • Single sign-on (SSO)
  • Access request and role-based access control (RBAC)
  • Separation of duties (SoD)
  • Reporting and access reviews

In this post, we’ll discuss these tasks to show how you can achieve effective identity and access governance when using Google Cloud.

User provisioning and deprovisioning

Let’s start at the very beginning. Google Cloud offers several ways to onboard users. Cloud Identity is a centralized hub for Google Cloud and G Suite to define, set up, and manage users and groups—think of Cloud Identity as a provisioning and authentication solution, whereas Cloud IAM is principally an authorization solution. Once they’re onboarded, you’ll be able to assign permissions to these users and groups in Google Cloud IAM to allow them access to resources. 

Depending on your specific system of record, there are several scenarios to consider.

If you’re using an on-premises Active Directory or LDAP directory as a centralized identity store
This is the most common pattern for provisioning in enterprises. If your organization has a centralized directory server for provisioning all your users and groups, you can use that as a source of truth for Cloud Identity. Usually an enterprise provisioning solution connects the identities from the source of truth (HRMS or similar systems) to directories, so joiner, mover, and leaver workflows are already in place. 

To integrate an on-prem directory, Google offers a service called Google Cloud Directory Sync, which lets you synchronize users, groups, and other user data from your centralized directory service to the Google Cloud domain directory that Cloud Identity uses. Cloud Directory Sync can synchronize user status, groups, and group memberships. If you do this, you can base your Google Cloud permissions on Active Directory (AD) groups.

You can also run Active Directory in the cloud using a managed Active Directory service. You can use the managed AD service to deploy a standalone domain in multiple regions for your cloud-based workloads or connect your on-premises Active Directory domain to the cloud. This solution is recommended if: 

  • You have complex Windows workloads running in Google Cloud that need tight integration with Active Directory for user and access needs. 

  • You will eventually completely migrate to Google Cloud from your on-premises environment. In this case, this option will require minimal changes to how your existing AD dependencies are configured. 

If you primarily manage the user lifecycle with another identity management solution
In this example, you don’t have a directory as a central hub. Instead you’re using a real-time provisioning solution like Okta, Ping, SailPoint, or others to manage the user lifecycle. These solutions provide a connector-based interface—usually referred to as an “application” or “app”—that uses Cloud Identity and User Management APIs to manage users and group memberships. 

Joiner, mover, and leaver workflows are managed directly from these solutions. The Cloud Identity account is disabled as soon as a termination event is processed by the leaver workflow, as is the user’s access to Google Cloud. In the case of a mover workflow, when users change job responsibility, the change is reflected in their Cloud Identity group membership which defines their new Google Cloud permissions.

If you’re using a home-grown identity management system
Custom, home-grown identity systems are most commonly found when an organization’s complexity can’t be handled by an off-the-shelf product or when an organization wants greater flexibility than a commercial product can provide. In this case, the simplest option is to use a directory. You can interface with Cloud Identity using an LDAP compliant directory system. Users and groups provisioned via your custom identity management system can be synchronized to Cloud Identity using Cloud Directory Sync without having to write a custom provisioning solution for Cloud Identity.

Single sign-on

Single sign-on (SSO) allows you to access applications without re-authenticating or maintaining separate passwords. Authorization usually comes in as a second layer to make sure authenticated users are permitted to access a given resource. As with user provisioning and de-provisioning, how you use SSO depends on your environment:

  • SSO when using G Suite with Google Authentication. In this case, no special changes are required for Google Cloud sign-in. Google Cloud and G Suite both use the same sign-in, so as long as the right access is provisioned, users will be able to sign in to the Google Cloud console using their regular credentials.

  • SSO when using G Suite with a third-party identity management solution. If G Suite sign-on has already been enabled, Google Cloud sign-on will also work. If a new G Suite and Google Cloud domain has been established, then you can create a new SAML 2.0-compliant integration using Cloud Identity with your identity management provider. For example, Okta and OneLogin provide a configurable SAML 2.0 integration using their out-of-the-box app. 

  • SSO when using an on-premises identity solution. Cloud Identity controls provisioning and authentication for Google Cloud, and provides a way to configure a SAML 2.0 compliant integration with your on-premises identity provider. 

  • SSO when using a multi-cloud model. When using multiple cloud service providers, you can use Cloud Identity or invest in a third-party identity provider to have a single source of truth for identities.

Access request and role-based access control

For Google Cloud, “project” is the top-level entity that hosts resources and workloads. Google Cloud relies on users and groups to define the role memberships that are used to provide access to projects. For easier organization and to maintain separation of control, projects can be grouped into folders and access can be granted at the folder level, but the principle remains the same. There are several roles within Google Cloud based on workloads. For example, if you’re using BigQuery, you’d assign predefined roles like BigQuery Admin, BigQuery Data Editor, or BigQuery User to users. The best practice is to always assign roles to Google Groups.
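
For example, granting a predefined BigQuery role to a group on a project looks like the following sketch (the project and group names below are placeholders):

# Grant the BigQuery User role to a group rather than to individual users.
gcloud projects add-iam-policy-binding my-analytics-project \
    --member="group:data-analysts@example.com" --role="roles/bigquery.user"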

Google Groups are synchronized from your directory environment or from your identity management solution into Cloud Identity. Again, think of Cloud Identity as your authentication system and Cloud IAM as your authorization system. These groups can be modeled based on project requirements and then be exposed in your identity management system. They can then be requested by the end user or assigned automatically based on their job requirements using enterprise roles. 

One way to structure your Google Cloud organization to separate workloads is to set up folders that mirror your organization’s business structure and match them to how you grant access to different teams within your organization:

  • A top level of folders reflects your lines of business (LOB)

  • Under a LOB folder you would have folders for departments

  • Under departments you would have folders for teams 

  • Under team folders you would have folders for product environments (e.g., DEV, TEST, STAGING, and PROD)

With this structure in place, you would model Active Directory or identity management provider groups for access control based on this hierarchy, assign them based on roles, or expose them for access request/approval. You should also have “break glass” account request procedures and the pre-approved roles a user could be granted to manage potential emergency situations. 

Organizations that have frequent reorganizations might want to limit folder nesting. Ultimately, you can go as abstract or as deep as you’d like to balance flexibility and security. Let’s look at two examples of how this balance can be achieved.

The figure below shows an example of structuring your Google Cloud organization with a business-unit-based hierarchies approach. The advantage of this structure is that it lets you go as granular as you’d like; however, it is difficult to maintain, since it doesn’t adapt well to organizational changes like reorganizations.

Example of a business-unit-based hierarchy

Next we have an example of an environment-based hierarchies approach to your Google Cloud organization. This structure still lets you implement granular control over your workloads, and it’s also easier to implement using infrastructure as code (think Terraform).

Example of an environment-based hierarchy

Separation of duties

Separation of duties (SoD) is a control that’s designed to prevent error or abuse by ensuring that at least two individuals are responsible for a task. Google Cloud provides several options to achieve SoD:

  1. As seen in the previous section, the Google Cloud resource hierarchy lets you create a structure that provides separation based on job responsibilities and organizational position. For example, an operational engineer working in one line of business usually wouldn’t have access to a project in another line of business, or a financial analyst wouldn’t have access to a project that deals with data analysis.

  2. Google Cloud lets you define IAM custom roles, which can simply be a collection of permissions drawn from out-of-the-box roles (see the sketch after this list). 

  3. Google Cloud lets you bind roles to groups at various levels in your resource hierarchy. With this powerful feature, a group can be granted access at the organization, folder, or project level, depending on how the bindings are created.
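
As a sketch of points 2 and 3 together (all IDs, names, and permissions below are placeholder assumptions), you could define an organization-level custom role and bind it to a group on a specific folder:

# Define a narrow custom role at the organization level...
gcloud iam roles create bigQueryAuditor --organization=123456789012 \
    --title="BigQuery Auditor" \
    --permissions=bigquery.jobs.list,bigquery.datasets.get --stage=GA

# ...then bind it to a group on one line-of-business folder only.
gcloud resource-manager folders add-iam-policy-binding 456789012345 \
    --member="group:lob1-auditors@example.com" \
    --role="organizations/123456789012/roles/bigQueryAuditor"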

Here’s an example of how roles can be defined at an organizational level.

Example org-level groups

In the next figure, we define a “Security admin group” and assign the appropriate IAM roles at the Org level.

Example security admin group

Then, along similar lines, you can think of groups that could be defined at a folder or project level.

Example folder- and project-level groups

For example, below we define the “Site reliability engineers” group and assign the appropriate IAM roles at the folder or project level.

Example site reliability engineers group

Reporting and access reviews

Users can gain access to a project either by having it directly granted to them or from organization- or folder-level inheritance. This can make it a bit unwieldy to meet compliance requirements that require you to have a report of “who has access to what” within Google Cloud. 

While you can get this “master” list using the Cloud Asset Inventory APIs or the gcloud asset search-all-iam-policies command, a better option is to export IAM policies to BigQuery using the Asset Inventory export capabilities. Once this data is available in BigQuery, you can analyze it in Data Studio or import it into the tools of your choice.
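
As a hedged example of that flow (the organization ID, bucket, and dataset names below are placeholders), you might export to Cloud Storage and load the result into BigQuery:

# Export all IAM policies in the organization as newline-delimited JSON...
gcloud asset export --organization=123456789012 \
    --content-type=iam-policy --output-path=gs://my-audit-bucket/iam-policies.json

# ...then load the export into BigQuery for analysis.
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON \
    governance.iam_policies gs://my-audit-bucket/iam-policies.json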

Putting it all together

Identity and access governance can be a challenging task. After reading this blog post, we hope that you have a clearer understanding of the options you have to address it on Google Cloud. To learn more about IAM, check out the technical documentation and our presentation at Cloud Next ‘19.

Introducing BigQuery column-level security: new fine-grained access controls 1 Apr 2020, 4:00 pm

We’re announcing a key capability to help organizations govern their data in Google Cloud. Our new BigQuery column-level security controls are an important step toward placing policies on data that differentiate between classes. This allows for compliance with regulations that mandate such distinction, such as GDPR or CCPA. 

BigQuery already lets organizations provide controls to data containers, satisfying the principle of “least privilege.” But there is a growing need to separate access to certain classes of data—for example, PHI (patient health information) and PII (personally identifiable information)—so that even if you have access to a table, you are still barred from seeing any sensitive data in that table. 

This is where column-level security can help. With column-level security, you can define the data classes used by your organization. BigQuery column-level security is available as a new policy tag applied to columns in the BigQuery schema pane, and managed in a hierarchical taxonomy in Data Catalog. 

The taxonomy is usually composed of two levels: 

  • A root node, where a data class is defined, and 
  • Leaf nodes, where the policy tag is descriptive of the data type (for example, phone number or mailing address).

The aforementioned abstraction layer lets you manage policies at the root nodes, where the recommended practice is to use those nodes as data classes; and manage/tag individual columns via leaf nodes, where the policy tag is actually the meaning of the content of the column. 

Organizations and teams working in highly regulated industries need to be especially diligent with sensitive data. “BigQuery’s column-level security allows us to simplify sharing data and queries while giving us comfort that highly secure data is only available to those who truly need it,” says Ben Campbell, data architect at Prosper Marketplace.

Here’s how column-level security looks in BigQuery:

Example policy tag taxonomy with restricted, sensitive, and unrestricted data classes

In the above example, the organization has three broad categories of data sensitivity: restricted, sensitive, and unrestricted. For this specific organization, both PHI and PII are highly restricted, while financial data is sensitive. You will notice that individual info types, such as the ones detectable by Google Cloud Data Loss Prevention (DLP), are in the leaf nodes. Because policies are managed on the root nodes, you can move a leaf node (or an intermediate node) from a restricted data class to a less sensitive one without re-tagging columns, which lets you reflect changes in regulations or compliance requirements simply by moving leaf nodes. For example, you can take "Zipcode" from "Unrestricted Data," move it to "PII," and immediately restrict access to that data.
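
To make this concrete, a policy tag is attached to a column as part of the table schema. A hypothetical schema snippet (the project, taxonomy, and tag IDs below are placeholders) might look like:

[
  {"name": "treatment", "type": "STRING"},
  {"name": "patient_name", "type": "STRING",
   "policyTags": {"names": ["projects/my-project/locations/us/taxonomies/123456/policyTags/789012"]}}
]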

Learn more about BigQuery column-level security

You’ll be able to see the relevant policies applied to BigQuery’s columns within the BigQuery schema pane. If you attempt to query a column you don’t have access to (clearly indicated by a banner notice as well as the grayed-out field), access is securely denied. Access control applies to every method used to access BigQuery data (API, views, and so on). 

Here’s what that looks like:

Schema of BigQuery table. All but the first two columns have policy tags imposing column-level access restrictions. This user does not have access to them.

We’re always working to enhance BigQuery’s (and Google Cloud’s) data governance capabilities to provide more controls around access, on-access data transformation, and data retention, and provide a holistic view of your data governance across Google Cloud’s various storage systems. You can try the capability out now. 

The top 7 Google Maps Platform YouTube videos for building with maps 31 Mar 2020, 9:00 pm

Tens of thousands of developers view Google Maps Platform videos every year—from adding a static map to a website to building even more advanced mapping features into mobile applications. If you’re just getting started with maps, or are looking for a refresher, here’s a look at a few of our most-viewed videos to help you get started using Google Maps Platform today. 

Be sure to visit the channel to see even more videos and subscribe to stay up to date. 

How to create and attach a billing account to a Google Cloud Platform project

In this video, Emily Keller shows how to create a Google Cloud project and attach a billing account to it.

How to enable Google Maps Platform APIs and SDKs

In our most-watched video to date, Emily Keller explains how to enable Google Maps Platform APIs and SDKs.

How to generate and restrict API keys for Google Maps Platform

Find out how to generate and restrict API keys for use with Google Maps Platform.

How to add a Map using Static and Embed APIs

Learn two ways to add Maps to your web page without using JavaScript. The Maps Static API and the Maps Embed API are here to help.

How to add Maps using the JavaScript API

In this video, learn how to add a simple Google map with a marker to a web page using the Google Maps Platform JavaScript API.

How to add a Map using iOS SDK

Learn how to add a simple Google map with a marker to your iOS app.

Working with markers: custom markers and marker clustering

This video features two ways to highlight points on your maps using custom markers and marker clustering.

For more information on Google Maps Platform, visit our website.

Introducing Service Directory: Manage all your services in one place at scale 31 Mar 2020, 4:00 pm

Enterprises rely on increasing numbers of heterogeneous services across cloud and on-premises environments. Google Cloud customers, for example, may use services like Cloud Storage alongside third-party partner services such as Snowflake, MongoDB, and Redis, as well as their own company-owned applications. Securely connecting to and managing these multi-cloud services can be challenging, especially as resources need to scale up and down to meet fast-changing business needs.

Customers want to be able to take a service- rather than infrastructure-centric approach to connecting to Google Cloud services, their own applications, and third-party partner services from GCP Marketplace. Service Directory is a new managed solution to help you publish, discover, and connect services in a consistent and reliable way, regardless of the environment and platform in which they are deployed. It provides real-time information about all your services in a single place, allowing you to perform service inventory management at scale, whether you have a few service endpoints or thousands.

Simplify service management and operations

Service Directory reduces the complexity of management and operations by providing unified visibility for all your services across cloud and on-premises environments. And because Service Directory is fully managed, you get enhanced service inventory management at scale with no operational overhead, increasing the productivity of your DevOps teams. At the same time, advanced permission capabilities let you ensure that only the correct principals (users and applications) are able to update this information or look up services, freeing service developers from worrying about accidentally impacting other services.

Connecting hybrid and multi-cloud services at scale

With Service Directory, you can easily understand all your services across multi-cloud environments. This includes workloads running in Compute Engine VMs and Google Kubernetes Engine (GKE), as well as external services running on-prem and in third-party clouds. It increases application reachability by maintaining the endpoint information for all your services. Service Directory lets you define services with metadata, allowing you to group services while making your endpoints easily understood by your consumers and applications. Customers can use Service Directory to register different types of services and resolve them securely over HTTP and gRPC. For DNS clients, customers can leverage Service Directory's private DNS zones, a feature that automatically updates DNS records as services change.
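To make that publish-and-resolve flow concrete, here's a minimal sketch using the google-cloud-service-directory Python client. The project, region, and service names are hypothetical, and exact method and field names may differ across client library versions:

```python
from google.cloud import servicedirectory_v1

# Hypothetical project and region.
parent = "projects/my-project/locations/us-east1"

registration = servicedirectory_v1.RegistrationServiceClient()

# Publish: create a namespace, a service, and an endpoint for it.
namespace = registration.create_namespace(
    parent=parent,
    namespace_id="prod",
    namespace=servicedirectory_v1.Namespace(),
)
service = registration.create_service(
    parent=namespace.name,
    service_id="payments",
    service=servicedirectory_v1.Service(),
)
registration.create_endpoint(
    parent=service.name,
    endpoint_id="payments-0",
    endpoint=servicedirectory_v1.Endpoint(address="10.0.0.5", port=8080),
)

# Discover: consumers resolve the service to get its live endpoints.
lookup = servicedirectory_v1.LookupServiceClient()
resolved = lookup.resolve_service(request={"name": service.name})
for endpoint in resolved.service.endpoints:
    print(endpoint.address, endpoint.port)
```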

Let’s connect

For more on Service Directory, check out this video. Click here to learn more about GCP’s networking portfolio and reach out to us with feedback at gcp-networking@google.com.

How Google Cloud is helping during COVID-19 31 Mar 2020, 4:00 pm

We’re all in the midst of an extraordinary moment—not only for our teams, colleagues, and customers, but for the world at large. The impact of the novel coronavirus (COVID-19) has created many new challenges, and for many of us, has required that we adopt new ways of working. 

All over the world, businesses and users depend on Google Cloud to help them stay connected and get work done. And we take this responsibility very seriously. Today, I want to share many of the ways we’re working to support businesses, government institutions, researchers and one another. 

How we’re helping workers stay safe and productive 

Empowering remote workers to stay connected
As more and more businesses rely on connecting an at-home workforce to maintain productivity, we've seen surges in the use of Google Meet, our video conferencing product, at a rate we've never witnessed before. Over the last few weeks, Meet's day-over-day growth surpassed 60%, and as a result, its daily usage is more than 25 times what it was in January. Despite this growth, the demand has remained well within our network's capacity.

Because we know how critical keeping colleagues connected and engaged is for business continuity, we’ve made the advanced features in Google Meet free to all G Suite and G Suite for Education customers globally. We’ve also made Meet Hardware available in additional markets, including South Korea, Hong Kong, Taiwan, Indonesia and South Africa, to ensure customers have the right hardware to complement their Meet solution.  

We've heard from a number of enterprises that G Suite has helped them make the transition to remote work. The MACIF Group, a leading French mutual insurance provider, was able to ensure business continuity and keep its employees connected with G Suite, which was already deployed to more than 8,000 employees. MACIF staff shifted from in-person meetings to more than 1,300 Google Meet video meetings daily, and the use of collaborative virtual rooms facilitated important human contact and responsiveness in an unexpected period of remote work.

Korean gaming company Netmarble told us G Suite helped them make the company-wide transition to working from home smoothly, saying, “With video conferencing through Google Meet, collaboration via Google Docs, and all data accessible on Google Drive, there's really no difference when working from the home or the office.” 

Providing training opportunities to upskill employees
As people transition to remote work and learning in response to COVID-19, many are looking to build their skills and increase their knowledge while at home. To help, we're offering our portfolio of Google Cloud learning resources, including our extensive catalog of training courses, hands-on labs on Qwiklabs, and interactive Cloud OnAir webinars at no cost until April 30. Anyone can gain cloud experience through hands-on labs no matter where they are—and learn how to prototype an app, build prediction models, and more—at their own pace. Teams can also build their skills through our on-demand courses on Pluralsight and Coursera. Our most popular learning paths, including Cloud Architecture and Data Engineering, are now available for all.

How we’re helping public sector agencies and educational institutions

Supporting government efforts to fight COVID-19
We’re working with governmental organizations around the world on projects such as developing AI-based chat technology to help overtasked agencies respond more quickly to citizen requests; bolstering government websites that get critical information to the public with free content delivery network (CDN) and load-balancing services; and providing services and tools to track the spread of the virus.

In the U.S., we are working with the White House and supporting institutions to develop new text and data mining techniques to examine the COVID-19 Open Research Dataset (CORD-19), the most extensive machine-readable coronavirus literature collection to date. 

We're also working with state agencies like the Oklahoma State Department of Health on solutions for medical staff to engage remotely with at-risk people who may have been exposed to the coronavirus. Within 48 hours, the department deployed an app that allowed medical staff to follow up directly with people who reported symptoms and direct affected citizens to testing sites. We worked with our partner MTX Group to create the app and are now deploying it with governments in Florida, New York, and many other states so they can use our tools for insights into how the virus's spread is affecting citizens and state healthcare systems.

Internationally, we’re working with a number of governments to provide collaboration solutions and tools to track the spread of COVID-19. For example, in Spain, we’ve set up an app for the regional government in Madrid to help citizens perform self-assessments of coronavirus symptoms and offer guidance, easing the demands on the healthcare system. The Spanish national government is also planning to deploy this app across other regions in the country in the coming days. In Italy, the more than 70,000 employees working in the Veneto region’s healthcare system are relying on G Suite to maintain their high level of service and patient care during the COVID-19 crisis. This week, the Australian Government Department of Health launched its Coronavirus Australia App. Built on Google Cloud, the app offers real-time information and advice about the fast changing COVID-19 pandemic.

And in Peru, the judiciary branch is using Google Meet to continue operating during the nationwide quarantine. Through video conferencing, it is carrying out both internal meetings and hearings. As a result, attorneys, lawyers, and judiciary clerks don't have to physically attend court, which helps keep the virus from spreading while maintaining the administration of justice in the country.

Assisting educational institutions with content, tools, and distance learning 
Educational institutions have been particularly impacted by the coronavirus, and we’re undertaking a number of initiatives to support them, ranging from providing free content and educational tools to supporting distance-learning initiatives that help educators continue teaching students who are at home.

For example, in recent weeks, we rolled out Google Classroom to more than 1.3 million students in New York City so they can continue their school year virtually at home. And we continue to provide critical infrastructure for nonprofit educational organization Khan Academy, which supported 18 million learners per month before the crisis. Since school closures began, Khan Academy is seeing record growth across all metrics: Time spent on the site is approximately 2.5 times normal, student and teacher registrations are up roughly six times from this period last year, and parent registration is up 20 times normal. 

In Malaysia, where schools are closed in response to COVID-19, we've been hosting daily webinars for teachers, bringing them up to speed on how they can leverage Google tools to teach from home.

And in Indonesia, we provided the technology infrastructure for online education services platform Ruangguru, which opened a free online school service in response to school closures in Indonesia and was tapped by more than a million learners on day one.

In Italy, we worked with the Italian Ministry of Education—the governing body accountable for millions of Italian schoolchildren—to rapidly shift students entirely to remote learning. Our teams banded together, and engineers worked around the clock to speed up the enrollment process, even making a virtual help desk available for timely activation and support. As a result, the Ministry of Education was able to help bring millions of students online in a matter of days.

How we’re helping other organizations 

Supporting researchers, hospitals, and more
Healthcare is the most impacted industry during the pandemic, and technology can be a critical tool to help. We're providing solutions for the health research community to identify new therapies and treatments, and assisting hospital systems with tracking the pandemic and providing telehealth and remote-patient monitoring solutions.

In health research, we're making several COVID-19 public datasets free to query, like the Johns Hopkins Center for Systems Science and Engineering COVID-19 data, the U.S. Census Bureau's American Community Survey data, and OpenStreetMap data. We're also providing $20 million in Google Cloud credits to academic institutions and research organizations as they study potential therapies and vaccines, track critical data, and identify new ways to combat COVID-19. Researchers at accredited academic institutions can submit a proposal to the COVID-19 High Performance Computing Consortium, while other researchers who need Google Cloud capacity for work on COVID-19 can submit proposals directly to us.

Last week, we joined the COVID-19 Healthcare Coalition, a group of healthcare, technology, and research organizations that have come together to share resources in order to fight the virus. Coalition members include athenahealth, Mayo Clinic, University of California Health System, and others. As part of the coalition, we're helping build a data exchange that allows coalition members to safely and securely share and analyze data—ultimately enabling many of the world's top researchers to work together with shared data.

We're also supporting hospitals in several ways. In Asia, since the COVID-19 outbreak, more people have been turning to Doctor Anywhere's telemedicine services, opting for video consultations with locally registered doctors and medication delivered to their doorstep. According to Rishik Bahri, chief technology officer at Doctor Anywhere, "We've seen a more than 70% increase in traffic on our telehealth application since the coronavirus outbreak, and it's more important than ever to deliver frictionless access to users and partners alike on the Doctor Anywhere app."

In the UK, the NHS is exploring the use of G Suite to allow them to collect critical, real-time information on hospital responses to COVID-19, such as hospital occupancy levels, and accident and emergency capacity. 

Helping retailers, manufacturers, and other businesses handle demand
Businesses globally are facing unprecedented challenges in terms of forecasting demand from customers and the impact of COVID-19 on their overall supply chains. To help on the demand side, we’ve activated our Black Friday/Cyber Monday Protocol for retailers and other businesses seeing exponential traffic increases—bringing professional services, technical account managers and Customer Reliability Engineering resources together to support, plan and react to user demand during these peak times.

One of Canada's largest retailers, Loblaw, asked for our help to support an increase in traffic to its PC Express grocery delivery and pickup platform. The Google Cloud team provided them with the resources to ensure they could scale, helping people get food and other critical goods during this time. As Hesham Fahmy, GM at Loblaw Companies Limited, put it: "The Google Cloud team has been a fantastic partner during this ever changing time. We truly appreciate the level of ownership, care and help Google has been providing. It is for a great cause, to make sure Canadians don't have to stress about their essential needs in these uncertain times."

German luxury fashion retailer Breuninger employs 5,000 staff and decided to temporarily close its 11 department stores and focus on its online shop only. Having all staff suddenly working remotely presented big challenges, as its existing video conferencing tools proved unable to handle the sudden increase in usage. Google Cloud helped get more than 1,100 Breuninger employees live on G Suite within 48 hours—with more employees to be added over the next few days. Breuninger is also exploring how to interact with customers through new digital services enabled by G Suite.

Providing a stable platform for telecom, media and entertainment 
Communications and entertainment companies are seeing challenges as varied as they are. While the telecommunications industry is working hard to keep people connected, the media industry has seen demand increase as people look for news and entertainment, and the video game industry has also experienced a large spike in usage as more people are staying home. We are working with some of the largest news agencies and game publishers so that people can stay informed and have some fun during this challenging time.

Telecommunications providers are leveraging our technology to deliver services as seamlessly as possible. For example, Vodafone is using GCP to analyze and prioritize network traffic, directing bandwidth to the users who need it most.

In media, we helped the broadcast team at Yahoo Finance transition 150 reporters, producers, anchors and technicians from a legacy TV studio to a 100 percent work from home model overnight. Within the span of a few hours, our team worked with them to set up a seamless eight hours of live broadcast, via Google Meet, on air from locations across the U.S. and London, providing people with critical news and information in this particularly uncertain time. 

In gaming, Unity Technologies, which recently partnered with the World Health Organization (WHO) on a new #PlayApartTogether initiative, has seen player demand for online games significantly increase due to COVID-19 social distancing mandates. Despite these huge spikes in gaming activity, Unity’s Multiplay server hosting solution has so far not seen any downtime. Unity's partnership with Google Cloud has helped them ensure real-time online games stay up and running and continue to deliver great player experiences, regardless of demand surges. 

Looking ahead

Although we’re all facing an extraordinary moment of uncertainty, I’m proud to report that at Google Cloud, we’re prepared—we’ve activated remote customer service agents and our enhanced support protocol for peak periods, we’ve detailed plans to manage our capacity and supply chain, and we’ve rigorously tested the resilience of our infrastructure and processes. All of these preparations have been put in place to ensure we can best support our customers during a time like this.

We’ll continue to work tirelessly on these and other initiatives to support our users, customers, and communities in this time of need. I’m so grateful to the many extraordinary Cloud Googlers that have worked so hard to provide so many capabilities for our customers.

Customer engineers bring people and technology together 30 Mar 2020, 4:00 pm

Editor’s note: We’re celebrating Women’s History Month by talking with Cloud Googlers about identity and how it influences their work in technology. 

At Google Cloud, our customer engineers bring technology and people together, helping our customers choose the right tools to solve their problems, and create the solutions that will help them keep growing. We talked to a few customer engineers about their path to customer engineering, their technology passions, and what advice they offer to other women in technology.

“If I Only Had a Heart”: Being human in tech beats out a brains-only approach

Kristin Aliberto, Customer Engineer

I was lucky to study computer science in high school; I came of age at a time when people were loudly claiming there wouldn't be any tech jobs in the U.S. Undergrad saw me following my passion for history and teaching instead. But after finishing my degree at Temple University, I ran IT support for our Student Center and became fascinated with end-user technology. Again, I was lucky: our school was an early adopter of Google Apps for Education.

That was my first interaction with the cloud, and led to my first big tech job—working as the support lead for New York University’s Google Apps project.

Technology as a language for solving problems. My social sciences background comes to the forefront when I'm considering data. Accepting quantitative metrics without question often leads to a fundamental misunderstanding of what those metrics actually represent, which in turn affects how you solve the related problems.

Similarly, when I hear technology requirements from a customer, they are often focused on the “how.” What I want to know is “why.” Once we have the “why”, we can partner with our customer to identify the “what” - the solution - along with the “how”. 

Even if Google Cloud doesn’t have a “how” for the customer today, by working in partnership with the customer on the “what” and more importantly, the “why”, we provide a valuable service. 

By joining Google, I have been able to explore many different kinds of customer problems. I can get hands-on with cool technology, and use it to solve intricate challenges both on the job - and outside of it. I dabble in new technologies whenever I can. My latest homebrew experiment was building a Kubernetes cluster from retired Chromeboxes. As a female rugby coach, I’m often frustrated by standard machine learning around sports video analysis, because models are usually trained for male bodies. I’ve made experimenting with that a priority, to help myself, my players and women’s athletics - a subject I have a deep passion for. 

My advice to other women is simple: know your value. 

Cultivate an awareness of what your value is. Surround yourself with women who will both challenge and support you. Understand that the field you play on was not set by you, but don’t let it stop you. Don’t be ashamed to play the game as the current rules dictate, and never miss an opportunity to challenge those rules. 

Level the playing field for those who come after. 

Connecting customers with technology

Roshni Joshi, Director, Customer Engineering

Growing up in India, I had great role models surrounding me - my mom is an anesthesiologist, and many of my aunts have advanced degrees in the sciences and followed professional careers as professors and teachers. My undergraduate degree was in electrical engineering, but after my first internship, I realized it wasn't my thing. I needed a profession where I could interact more with people. I switched my focus to get a graduate degree in computer science—which was quite the journey, since I had limited programming experience when I started! I worked first as an SAP consultant, then moved into program management and practice management, and then presales.

My biggest draw to Google is our culture and our commitment to open source technologies—it's a point of pride for us that we don't lock users in. Many of today's hyperscale technologies supporting the digital economy were open sourced at Google—like Chromium, Kubernetes, and TensorFlow. Sharing those capabilities with our enterprise customers, and helping apply them to their problems, is really exciting. I love understanding "the why" and solving for it with "the what". Right now, though, being a part of this company and seeing our response to the coronavirus crisis through our people, resources, and technology is humbling and inspiring - I am so proud to wear the Google badge every day, but especially now.

It’s easy to be a victim of impostor syndrome and self-doubt or think you’re not qualified to be in a certain role or field—I see this a lot in women, and I experience it myself. I tell women newer to the workforce to hold their impostor syndrome in healthy balance with their confidence - be honest about your strengths and weaknesses. If you want something, pursue it and pursue it with passion. Apply for that job opening, make that career change or ask for a raise. No one else will ask for you. You have to do it yourself. The worst that can happen is you face a no. 

Also, remember, what will push you furthest in your career is grit and tenacity. If you’re feeling overwhelmed, just remember the importance of doing small things better every day. If you keep getting small wins in inches, over time, they add up to a mile.

Bringing scientific rigor to cloud problems

Vanessa July, Customer Engineer

I studied chemistry and nanotechnology and spent my undergraduate years doing lab work. Once I got out of college, I didn’t really want to pursue a PhD or a post-doc and I had a mentor who worked in technology. He recommended that I apply to HPE’s tech boot camp for recent grads, and through his sponsorship, I started my tech career at HPE in Sales Engineering. Shortly after joining Google Cloud, I got the chance to work on the Higher Education team, and working with researchers really reignited my love of science. I realized that though I didn’t want to do lab work as a career, I could still work closely with scientists and help facilitate their work with cloud computing.

I always tell people that you don't have to have studied computer science or coding to work in technology. In fact, the scientific method I learned in my undergrad years has served me in every other part of my life. Science involves asking questions about the unknown, creating a hypothesis, and designing an experiment to test it. It requires you to find data to support or reject that hypothesis. That's what I do now—figuring out the unknowns and building a framework to solve for them. Because solving customer problems looks different every time, it's important to keep the number of variables as few as possible for consistency, then reassess and adjust as needed. For example, to scale a computing environment, you may not know at the onset what you'll uncover as you add more cores and resources, so you need to be able to test intelligently and have a backup plan.

I’m stubborn, so I advise those newer to the workforce to have some tenacity. Ultimately, you have to ask for what you want. Know your expectations and be comfortable communicating them - especially to mentors or sponsors who can help you. My mentor knew I was smart enough to learn about technology on the job, and his sponsorship took me into that first job. If someone is willing to advocate for you, then let them know what you want - whether that’s a promotion, a raise, or another goal. Create a clear understanding for those helpers so they are able to lift you up.

COVID-19 public dataset program: Making data freely accessible for better public outcomes 30 Mar 2020, 4:00 pm

Data always plays a critical role in the ability to research, study, and combat public health emergencies, and nowhere is this more true than in the case of a global crisis. Access to datasets—and to tools that can analyze that data at cloud scale—is increasingly essential to the research process, and is particularly necessary in the global response to the novel coronavirus (COVID-19).

To aid researchers, data scientists, and analysts in the effort to combat COVID-19, we are making a hosted repository of public datasets, like Johns Hopkins Center for Systems Science and Engineering (JHU CSSE), the Global Health Data from the World Bank, and OpenStreetMap data, free to access and query through our COVID-19 Public Dataset Program. Researchers can also use BigQuery ML to train advanced machine learning models with this data right inside BigQuery at no additional cost.
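As a quick illustration, these tables can be queried directly with the BigQuery Python client. This is a minimal sketch; the table and column names below reflect our understanding of the hosted JHU CSSE dataset, so check the Cloud Console listing for the exact schema:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Top ten countries by confirmed cases on a given date.
query = """
    SELECT country_region, SUM(confirmed) AS confirmed
    FROM `bigquery-public-data.covid19_jhu_csse.summary`
    WHERE date = '2020-03-29'
    GROUP BY country_region
    ORDER BY confirmed DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.country_region, row.confirmed)
```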

"Making COVID-19 data open and available in BigQuery will be a boon to researchers and analysts in the field," says Sam Skillman, Head of Engineering at Descartes Labs. "In particular, having queries be free will allow greater participation, and the ability to quickly share results and analysis with colleagues and the public will accelerate our shared understanding of how the virus is spreading."

These datasets remove barriers and provide access to critical information quickly and easily, eliminating the need to search for and onboard large data files. Researchers can access the datasets from within the Google Cloud Console, along with a description of the data and sample queries to advance research. All data we include in the program will be public and freely available. The program will remain in effect until September 15, 2020. 

“Developing data-driven models for the spread of this infectious disease is critical,” said Matteo Chinazzi, Associate Research Scientist, Northeastern University. “Our team is working intensively to model and better understand the spread of the COVID-19 outbreak. By making COVID-19 data open and available in BigQuery, researchers and public health officials can better understand, study, and analyze the impact of this disease.”

The contents of these datasets are provided to the public strictly for educational and research purposes only. We are not onboarding or managing PHI or PII data as part of the COVID-19 Public Dataset Program. Google has practices and policies in place to ensure that data is handled in accordance with widely recognized patient privacy and data security policies.

We on the Google Cloud team sincerely hope that the COVID-19 Public Dataset Program will enable better and faster research to combat the spread of this disease. Get started today.

Loading geospatial data into BigQuery just got easier with FME 27 Mar 2020, 5:30 pm

With all of the geographical data available today, we made sure that Google Cloud’s BigQuery data warehouse includes first-class support for geospatial data types and functions. With this unique capability, you can process and analyze geospatial data at scale. To accelerate the workflows of our geospatial customers, we’re announcing our partnership with Safe Software, the maker of FME.

FME is a data integration platform designed to support spatial data worldwide, and 2020.0 brings the ability to ingest data from more than 450 geo formats and applications and materialize them as BigQuery tables.

FME is ideal to use when you need to ingest and transform one of the myriad geospatial file and data formats and land that data in BigQuery. FME is designed to help you overcome common data integration challenges. Using a visual interface, you can build workflows to extract, transform, load, integrate, validate, and share data. Plus, you can build event-based workflows to automate your data integration tasks, create notification services, and take advantage of real-time processing.

Using FME to connect to BigQuery GIS

There are hundreds of GIS file types and projections. Loading them into a data warehouse requires transforming the data type and its projection into the native projection of the data warehouse. In this case, BigQuery GIS uses the WGS84 coordinate system. Workflows are scalable: When you build a data integration workflow in FME, you can ingest a single file or hundreds at a time, transform them, and load them directly into BigQuery tables, all within FME. 
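Once FME has landed your data in a BigQuery table with a GEOGRAPHY column, you can analyze it with BigQuery GIS functions. Here's a minimal sketch, assuming a hypothetical table with an id column and a geom GEOGRAPHY column in WGS84:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Find features within 1 km of a point; ST_GEOGPOINT takes
# longitude first, then latitude.
query = """
    SELECT id, ST_ASTEXT(geom) AS wkt
    FROM `my-project.gis.road_segments`
    WHERE ST_DWITHIN(geom, ST_GEOGPOINT(-111.89, 40.76), 1000)
"""
for row in client.query(query).result():
    print(row.id, row.wkt)
```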

Here’s a look at the FME Workbench interface, the authoring environment for data integration workflows.

FME Workbench interface.jpg

FME supports hundreds of formats, applications, and systems, and includes 493 transformers to help with data manipulation tasks like geometry and geography creation, validation, generalization, and coordinate system reprojection. With data coming from different sources, data validation and quality control are critical steps in your workflow. The transformers are accessible through the GUI and let you create consistent and repeatable spatial data pipelines. That way you can make sure the data migrated to data warehouses like Google BigQuery is valid and meets all requirements.

Using geospatial data in production

We’ve heard from customers using FME and BigQuery that they can move and transform data more quickly to focus on innovation.

"We've been using FME to transform shp, GeoJson, CAD and various other data types for several years,” says Adam Radel, IT director for GISP, State of Utah Department of Transportation. “We've imported over 1,000 files to BigQuery using FME in just a few weeks. The addition of the BQ writer to FME is game-changing for us. I'm excited that my team has such a powerful tool available to them."  

How to load geospatial data from FME to BigQuery

Check out this detailed look at how the loading process works. The example starts with the assumption that the user has data to load (a shapefile, for example) and a licensed or trial version of FME Desktop installed on a virtual machine or on their local machine. (Note: FME has example files for loading.)

If you don’t have FME running already, get started with a trial straight from the Google Cloud Marketplace, and check out these instructions on how to deploy FME in Google Cloud.

To use the data that you load with FME, you can also check out examples of how to query data in BigQuery GIS, like plotting hurricane paths in BQ GIS or k-means clustering of spatial data with BigQuery ML. We modeled the BQ GIS syntax on PostGIS, so you'll find the queries easy to compose as well. And click on Explore in GeoViz from the user interface to get a quick, styleable visualization of your results. For a more scalable, cloud-native GIS visualization solution that can use BigQuery as a scalable spatial backend, take a look at our partner CARTO.

G Suite developers: Scaling best practices for higher user demands 27 Mar 2020, 4:00 pm

As people around the globe increasingly work or learn from home, we want to recognize the impact this shift is having on software providers. Many third-party developers rely on G Suite APIs to deliver richly integrated experiences to users, and it’s crucial for them to plan proactively for potential risks.

To support developers as usage surges, we’re sharing our recommendations for how to best prepare for new scaling demands and user onboarding, as well as some tips on where to get help.

1. Plan for increased capacity

Technology platforms across the ecosystem, and in particular education platforms, are seeing unprecedented increases in usage as many businesses, schools, and universities move towards online and distance learning. If you expect that the usage of your application(s) will increase significantly, we highly recommend you plan for this proactively by considering the following: 

Google API quota needs: If your application depends on any Google APIs, we recommend that you estimate what your increased traffic might look like so you can make any appropriate quota increase requests. If you find that you will need a quota increase, you should submit a request ASAP with details on how you arrived at your estimations. 

Here’s how to request a quota increase for any G Suite APIs:

  • If you don't already have a billing account for your project, create one.

  • In the API Console, visit the Enabled APIs page and select an API from the list.

  • To view and change quota-related settings, select Quotas. To view usage statistics, select Usage.

Note: For quota increase requests related to YouTube APIs, please submit a request here.

Performance and scalability with Classroom, Drive, and Gmail: When you're regularly sending a large number of requests, you might receive 403 error responses with reasons such as dailyLimitExceeded, userRateLimitExceeded, or quotaExceeded. To handle these responses gracefully, Classroom has updated its developer docs to include more information on common request errors and how to handle them. For more suggestions on how to optimize your application with Classroom, read these tips for improving performance and batching API requests.
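A common way to handle these responses gracefully is truncated exponential backoff with jitter. Here's a minimal sketch for the Python google-api-python-client; the retry policy (five attempts, doubling waits) is illustrative rather than prescriptive:

```python
import random
import time

from googleapiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    """Execute a Google API client request, retrying rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as error:
            # 403s with reasons like userRateLimitExceeded or
            # quotaExceeded (and 429s) are worth retrying; anything
            # else, or the final failed attempt, is re-raised.
            if error.resp.status not in (403, 429) or attempt == max_retries - 1:
                raise
            # Wait 1, 2, 4, 8... seconds, plus up to a second of jitter.
            time.sleep(2 ** attempt + random.random())

# Usage with a hypothetical Classroom call:
# courses = execute_with_backoff(service.courses().list(pageSize=50))
```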

2. Solve new user issues related to distance and online learning 

As the necessity for online and distance learning increases, schools and districts are being onboarded to G Suite on a domain-wide basis. This makes the APIs that enable developers to programmatically manage accounts and Google Groups all the more crucial. For developers interested in Hangouts Meet integrations—or expanding their knowledge of these integrations—here’s a guide on how to leverage Meet functionality through other G Suite APIs, including the Calendar API and Admin SDK Reports API. 
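One common integration in this space is provisioning a Meet link programmatically by attaching conference data to a Calendar event. Here's a minimal sketch with the Python client; creds is assumed to have been obtained elsewhere with a Calendar scope, and the event details are placeholders:

```python
import uuid

from googleapiclient.discovery import build

# Assumes `creds` was obtained via an OAuth flow or a service account
# with domain-wide delegation, using a Calendar scope.
service = build("calendar", "v3", credentials=creds)

event = {
    "summary": "Remote class: Algebra II",
    "start": {"dateTime": "2020-04-06T10:00:00-04:00"},
    "end": {"dateTime": "2020-04-06T11:00:00-04:00"},
    # Ask Calendar to provision a Meet conference for the event.
    "conferenceData": {
        "createRequest": {
            "requestId": str(uuid.uuid4()),  # any unique string
            "conferenceSolutionKey": {"type": "hangoutsMeet"},
        }
    },
}

created = service.events().insert(
    calendarId="primary",
    body=event,
    conferenceDataVersion=1,  # required for conferenceData to take effect
).execute()

print(created.get("hangoutLink"))  # the Meet URL for the event
```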

For developers looking to explore other G Suite integration options, we recommend reviewing the developer offerings for each of our products here.

3. Know your channels of support

Below is a list of support options that can help developers quickly find answers.

  • Stack Overflow: Each G Suite API has its own tag for developers to post technical questions to. Each tag is of the format ‘google-productname’ or ‘google-productname-api’, e.g. google-classroom, google-drive-api, and google-calendar-api. 

  • Public Issue Trackers: Google uses public issue trackers to collect feedback from third-party developers. These issue trackers collect all the information needed to investigate so that our support teams can respond as soon as possible. When reporting an issue, please include as many details as possible, including the steps to reproduce the issue, relevant endpoints, request and response JSON, error logs, etc. A list of public issue trackers for all G Suite APIs can be found here.

As you respond to increasing or changing user demand, we encourage you to reach out for support through our different channels and take advantage of documentation highlighting different ways you can leverage G Suite to support users.

A note to our customers: How we’re supporting you through COVID-19 27 Mar 2020, 4:00 am

As COVID-19 makes its way across the globe, we know you are under extraordinary pressure to keep your organizations up and running. You rely on Google Cloud to stay connected and get work done every day. Whether it's helping you run a high-demand e-commerce site, augment your call center staff with Contact Center AI, power timely research, or enable employees to work from home efficiently and securely, we’re committed to keeping our services accessible to customers around the world. Today, we want to provide you with some details on our preparedness.

Below are two videos that discuss our business continuity plans from a technical and customer support perspective, from me and from our vice president of 24x7 reliability, Ben Treynor Sloss.

Keeping our systems up and running for customers

Now more than ever, keeping our systems up and running is our No. 1 priority.

How we're supporting Google Cloud customers

For more than a decade, Google has conducted regular disaster recovery testing (DiRT) to rigorously evaluate the resilience of our infrastructure and processes, led by our highly trained site reliability engineers (SREs). Through this testing, our teams are trained to find and address potential issues before they arise and, in the event of a disruption, recover as quickly as possible.

In terms of team structure, our SREs have historically operated from two or more locations to deliver 24x7 coverage. They’re in constant communication with our leadership team and actively monitoring global and local conditions. 

As for technical readiness, Google Cloud relies on massive amounts of compute and storage hardware to power your cloud workloads and G Suite. Since much of that hardware is proprietary, we can forecast capacity forward many months to build ahead of demand. We’re monitoring capacity closely and do not foresee shortfalls at this time. 

In addition, we maintain considerable reserve capacity both inside our network and at hundreds of points of presence and thousands of edge locations. The performance of our infrastructure remains as high as it was before the pandemic—the result of years of preparation.

Making sure our people are here when you need them

How we're supporting Google Cloud customers part 2

To keep lines of communication open between our teams and yours, we’ve provisioned our support agents with remote access so they can support you securely while working from home. And in the event of disruption to any of our support centers, we've identified primary, secondary, and tertiary backups for each site. Our SRE and product teams are closely integrated into our plans, ensuring the right experts are available to address complex issues. 

Industries like retail, media and healthcare are experiencing surges in e-commerce traffic, prolonged demand for streaming services, and new requirements to support telemedicine. For our customers on the front lines with specialized needs, we’ve activated our enhanced support structure—developed for peak demand situations like we see on some of the heaviest traffic days of the year.  

And finally, we’re enabling your teams to collaborate remotely. We’re offering the premium version of Hangouts Meet for free to existing customers, allowing you to host 250 participants per call, live stream meetings for up to 100,000 viewers within a domain, and record meetings and save them to Google Drive. 

Staying connected

We have a number of levers we can pull to prevent service disruptions and ensure your critical workloads have access to sufficient capacity to remain available and performant. We’re committed to maintaining the health of the systems that power your business, and will continue to keep you informed in the days and months ahead.

Simplifying Google Drive’s folder structure and sharing models 26 Mar 2020, 7:10 pm

The G Suite team has been working hard to make it easier to organize and share content in Google Drive, and help direct users to relevant files across various drives. 

These efforts have resulted in Drive shortcuts, which are files that act as pointers to other files in Google Drive. Shortcut files can be stored anywhere in Google Drive, including a shared drive or an individual user’s “My Drive.” 
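In the Drive v3 API, a shortcut is just a file whose MIME type is the shortcut type and whose shortcutDetails points at the target. Here's a minimal sketch in Python; the folder and file IDs are placeholders, and creds is assumed to hold authorized credentials with a Drive scope:

```python
from googleapiclient.discovery import build

# Assumes `creds` was obtained elsewhere with a Drive scope.
drive = build("drive", "v3", credentials=creds)

shortcut = drive.files().create(
    body={
        "name": "Q2 Plan (shortcut)",
        "mimeType": "application/vnd.google-apps.shortcut",
        "parents": ["TEAM_FOLDER_ID"],  # where the shortcut lives
        "shortcutDetails": {"targetId": "TARGET_FILE_ID"},  # what it points to
    },
    fields="id, shortcutDetails",
).execute()

print(shortcut["id"])
```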

Shortcuts, which are now generally available, will also require those of you who build on the Google Drive API to plan for some upcoming changes. This will ensure your apps continue to work properly, and enable you to take advantage of the latest features in Drive.

Beginning Sept. 30, 2020, it will no longer be possible to place an item in multiple folders; every item will have exactly one location. In the new model, shortcuts can be used to organize content in multiple hierarchies. This simplification of Drive's folder structure and sharing models will result in a change in the way some Google Drive API endpoints behave.

Developers are now able to opt in to the new model to develop and test their apps. We have introduced a new enforceSingleParent request parameter on affected endpoints. To opt in to the new behavior, set its value to true on the requests you make to the Google Drive API. If you choose to opt in ahead of time, the eventual enforcement will cause no further changes to your app’s behavior.
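For example, under the new model a move is a single files.update call that swaps parents, with the opt-in parameter set. A minimal Python sketch with placeholder IDs, again assuming creds holds authorized Drive credentials:

```python
from googleapiclient.discovery import build

drive = build("drive", "v3", credentials=creds)

# Move a file by removing its old parent and adding the new one.
# enforceSingleParent=True opts this request into the new
# one-parent behavior ahead of the migration.
drive.files().update(
    fileId="FILE_ID",
    addParents="NEW_FOLDER_ID",
    removeParents="OLD_FOLDER_ID",
    enforceSingleParent=True,
    fields="id, parents",
).execute()
```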

After Sept. 30, 2020, we will begin migrating all items in Drive to a one-parent state. Any other parent-child relationships will become shortcuts in the former parent folders. We will adaptively select the most suitable parent to keep, based on the hierarchy's properties.

The specific changes to the API’s behavior are:

  • You can only add a parent for an item that doesn't already have one. This affects the children.insert (v2), files.update (v2 / v3) and parents.insert (v2) endpoints. You can use the new canAddMyDriveParent capability to check if an item currently has no parents and if the user has sufficient access to add a parent for the item.

  • A request that creates a new item can no longer specify multiple parents. This affects the files.insert (v2), files.create (v3) and files.copy (v2 / v3) endpoints.

  • Moving an item will require access to share the item. Previously, only read access to the item was required. If the requester cannot share an item, they should create a shortcut to it instead. This affects the files.update (v2 / v3) endpoint. You can use the canMoveItemWithinDrive capability to check if the user has access to move an item.

  • An item’s owner will now be able to move their item into a new location, removing all of the item’s current parents, even if they don’t have access to those parents. All access inherited from those parents will be removed. Access that was added directly on the item will be preserved. Previously, the owner could add the item to any folder, causing it to become multi-parented, but this option will no longer be available. This affects the children.insert (v2), files.update (v2 / v3), and parents.insert (v2) endpoints.

  • Any operation that would have previously resulted in an item no longer having parents will now result in the item being parented under its owner’s My Drive. This affects the children.delete (v2), files.update (v2 / v3), and parents.delete (v2) endpoints.

  • When transferring ownership, the requester will be able to control whether the transferred item is moved to the new owner’s root or kept where it is. If they choose to move the item, any access inherited from the previous parent will be lost, but access that had been directly added to the item will be preserved. The previous owner will always maintain editor access to the item, just as they had prior to these changes. This affects the permissions.insert (v2) and permissions.create (v3) endpoints.

For more information, take a look at our updated developer documentation. We have introduced pages dedicated to single-parenting behavior changes (v2, v3) and the steps app developers need to take to migrate (v2, v3). The API reference has been updated to describe the new parameters in the contexts of each affected endpoint. We hope Drive shortcuts simplify how content is organized and shared, and that these changes enable your apps—and your users—to take advantage of these new features.

When girls are the shero of the story 24 Mar 2020, 4:00 pm

Editor’s note: We’re celebrating Women’s History Month by talking with Cloud Googlers about identity and how it influences their work in technology. 

Cloud Googler Komal Singh's path has taken her from India to Waterloo, Canada, where she's an engineering program manager working on serverless products. Her 20% project at Google resulted in the publication of her first children's STEM book, Ara the Star Engineer, which follows a young girl who uses coding to tackle big dreams and meets real-life women trailblazers. Her recent TED Talk, "Recoding Stories at Scale," explores creative uses of technology and AI to represent minorities and girls in books in ways that inspire them.

Here, she shares her path to working in technology

Who inspired you to go into engineering?
I grew up in India in the 1980s, and always loved sci-fi, physics, and math. I didn’t know female engineers, but I knew women who were doctors, and we had a female prime minister in India—so I assumed women could be prime ministers, but not engineers. My dad had a huge influence on me. He always encouraged me to be more hands-on, and showed me how to do things like change a lightbulb or fix the car engine. During dinner conversations, he created problems for me to think about, like how many rotations the fan was doing per minute.

In high school, I was amongst the few girls taking computer science courses. We usually worked together, and when we got a program to run, teachers thought it was a fluke, or that we were copying others’ work. There was extra pressure to prove that we had gotten it right ethically. When you’re part of a small percentage like that, it’s harder to be heard, and it’s easy to start doubting your abilities.

I also loved watching Dana Scully on The X-Files TV show. There's actually a "Scully Effect" phenomenon that's been researched, which found that more than 70% of women who watched that show went on to STEM fields. I wish I had also had someone to look up to who wasn't white, with blond hair. I think I would be a more fearless leader now. I'm grateful now that I have role models here at work, senior women who I look up to.

I want my daughter to see herself represented in ways that I didn’t. When my daughter was four, she told me that engineers are boys. As a woman of color and first-generation immigrant, I wanted to do something for her so she would know that wasn’t true. So I started a 20% project [a Google option for employees to explore topics of interest] to write a children’s book.

Why use books as a way to change perceptions?
The pipeline for getting girls into engineering and other STEM fields starts when they are about six. There are many initiatives being started, like Girls Who Code, Canada Learning Code, and Black Girls Code, but we need more funding for efforts like this. It can be hard to scale these programs, but books can operate at scale. Books are so pervasive, and can really influence kids as an everyday object. For kids, seeing people who look like them in books is really important.

Less than 5% of kids’ books feature people of color in lead roles. I wanted to put technology to good use, so I started a 20% project to create a series of books that feature more girls and women of color. In parallel, this project is working on making storytelling more inclusive, and we’re using AI to experiment with making traditional characters more racially diverse, so a reader could see Goldilocks as a black or Asian girl, or as a non-binary character, for example.

The book has been published in 10 other countries, and my daughter has traveled with me to some of these book launches. When a journalist in China asked her what she wanted to be when she grew up, she replied “an author and an engineer.” I love the fan mail that I get about the book. Girls around the world want to be problem solvers. I also hope my TED Talk on recoding stories will inspire more people to take action to make kids’ literature more equitable.

What advice do you give to those newer to the workforce?
Persistence pays off! I tried three times to work at Google over five years across different locations and job roles. The third time worked for me. Stay the course. Don’t be tempted to give up. And remember to be a wholesome person, whatever that means for you. For me, it’s being a good mom, having a meaningful career, and not giving up on my own hobbies and time for myself. It can be tough, but remember that your career isn’t a linear path. It will take turns along the way. This 20% project, for me, has opened up truly valuable opportunities that I didn’t foresee.

Simplified global game management: Introducing Game Servers 23 Mar 2020, 4:15 pm

To deliver the multiplayer gaming experiences gamers expect, game developers are increasingly relying on dedicated game servers as the default option for connecting players. But hosting and scaling a game server fleet to support a global game can be challenging, and many game companies either end up building costly proprietary solutions, or turning to pre-packaged solutions that limit developer choice and control.

Agones, an open source game server hosting and scaling project built on Kubernetes, was cofounded by Google Cloud and Ubisoft to offer a simpler option. It provides a community-developed alternative to proprietary solutions that also gives developers the freedom to seamlessly host and scale game server clusters across multiple environments—in multiple clouds, on premises, or on local machines.

Alejandro Gonzalez, GM of Jam City Bogota, shared his experience using Agones for the real-time strategy mobile game World War Doh: "Agones was a key piece in our relay strategy as it allowed us to easily administrate the Kubernetes-based relays for World War Doh. Agones saved us precious time required for a custom in-house counterpart and, in addition, kept our implementation generic and available to run on top of multiple cloud providers."

Today, we’re announcing the availability of Game Servers beta, a managed service offering of Agones. Whereas Agones is ideal for managing regional game server clusters, Game Servers supercharges Agones to simplify managing global multi-cluster game server fleets. 

If you’re already running Agones in production workloads, you can opt into the managed service by simply registering Agones-managed game server clusters with the new Game Servers API. And you can opt out of the managed service at any time if you want to go back to manual management.

You can also group these clusters into a concept we call realms—logical groupings of Kubernetes clusters, designed around a game’s latency requirements. You can then define game server configurations and scaling policies to simplify fleet management across realms and the clusters within them, all while still maintaining control and visibility.

Game Servers can help you plan for a variety of scenarios. For example, you can choose to increase the reserved capacity of game servers for a planned game event, or for a specific date and time range. Additionally, you can automate scaling to account for daily peak and non-peak hours across different regions. Game Servers’ rollout flexibility also means that you can A/B test different game server configurations and canary test changes, rolling them back if necessary. 

In beta, Game Servers will initially support only clusters running on Google Kubernetes Engine (GKE); we are diligently working on hybrid and multi-cloud support for later this year. The second half of 2020 will also bring more advanced scaling policies, and a deeper integration with our open source matchmaking framework, Open Match. Learn more about how to get started with Game Servers here.

Game Servers is the latest solution in Google Cloud’s ongoing effort to help game developers remove complexity from infrastructure management. Companies like Activision Blizzard are benefiting from our highly reliable global network, advanced data analytics and artificial intelligence (AI) capabilities, and commitment to open source, to bring great gaming experiences to their players.

Join our Google for Games digital broadcast on Monday, March 23rd to hear from Google experts and leading gaming companies such as Improbable, Grenge, Colopl and Unity, who are using our technology to take their games to the next level. Learn more.

Google Cloud named a leader in the Forrester Wave for Public Cloud Development and Infrastructure Platforms 19 Mar 2020, 4:00 pm

Today, we’re announcing that Google Cloud has been named a leader in The Forrester Wave™ for Public Cloud Development and Infrastructure Platforms, Q1 2020. This report evaluated cloud providers’ infrastructure and application development capabilities—important considerations for enterprises turning to the cloud to support their business growth and drive innovation.

In this report, Forrester noted Google Cloud’s investment in global expansion and innovative development services. 

Infrastructure and global reach

Google Cloud’s footprint has expanded to 22 regions with additional regions coming soon in Delhi, Doha, Toronto and Melbourne, providing enterprises with low-latency, high-performance compute, networking, analytics and storage services, as well as in-country disaster recovery options in India, Canada and Australia. This growth has enabled us to introduce new capabilities that allow you to control where you put your data to support regulatory, security and compliance requirements. We’ve also committed to extending the size and reach of our sales and support teams so that customers can get personalized attention around the globe.

In the report, Forrester also gave Google Cloud the highest possible scores in the reliability, storage services, and security certifications criteria. 

Innovative development services

Forrester recognized in the report that "Google is best for customers who prioritize leading-edge AI/ML services and microservices/containers development," and also highlighted Google Cloud's popular CI/CD tools.

Anthos, in particular, is a modern application platform that enables organizations to build, deploy and operate applications anywhere securely and consistently, while modernizing traditional applications for an increasingly hybrid and multi-cloud world. Anthos can manage workloads running on both on-prem and cloud environments, while reducing costs and improving developer velocity. 

Download the Forrester report

We're proud that Forrester has recognized Google Cloud's infrastructure and development capabilities. To learn more, please download The Forrester Wave™: Public Cloud Development and Infrastructure Platforms, Q1 2020 report here.


The Forrester Wave™: Public Cloud Development and Infrastructure Platforms, Q1 2020. The Forrester Wave™ is copyrighted by Forrester Research, Inc. Forrester and Forrester Wave™ are trademarks of Forrester Research, Inc. The Forrester Wave™ is a graphical representation of Forrester’s call on a market. Forrester does not endorse any vendor, product, or service depicted in the Forrester Wave™. Information is based on best available resources. Opinions reflect judgment at the time and are subject to change.
