Now that summer is officially here, it's fair to say that things are seriously heating up in the cloud, particularly from a GCP perspective. Not only do we have a host of new features and solutions coming our way this month, but we've also got the biggest Google event of the year looming at the end of the month - Cloud Next 18!
We'll be attending Cloud Next 18 to make sure that we have more of the latest updates to give you - so stay tuned. In the meantime, here's what Google is introducing this July!
Finland Gets Its Own Google Platform Region
One of the things that makes GCP so competitive is its global presence. This month, that presence spread even further with the arrival of the Finland Google Cloud Platform region. The "europe-north1" region is now open, which means that people can choose Finland as their preferred GCP region for running workloads and storing data.
For those interested in storing data closer to home, the Finnish region comes with everything you'll need to build your next application or maintain your data. According to Google, storing data in the new region can reduce latency by up to 65% for end-users in the Nordics, and by up to 88% for users in Eastern Europe.
Introducing Support for Node.js in App Engine
It's safe to say that modern developers have a soft spot for Google App Engine and its zero-configuration deployments. Since GCP is all about helping its customers be as productive as possible by supporting the most popular programming languages, the company recently announced that you can now deploy Node.js 8 applications to the App Engine standard environment. The results of this new support include:
- Automatic scaling and faster deployments
- Idiomatic developer experiences
- Strong security
- GCP access in your Node.js application
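To show how little configuration is involved, here's a minimal deployment sketch. The single-line app.yaml is the documented way to select the Node.js 8 runtime; the project layout (a package.json with a "start" script alongside your server code) is assumed:

```shell
# Minimal App Engine standard config for Node.js 8 (written here for illustration)
cat > app.yaml <<'EOF'
runtime: nodejs8
EOF

# Deploy from the directory containing package.json and your server code
gcloud app deploy app.yaml
```

App Engine infers everything else - scaling, instance management, and HTTPS termination - from this one-line config.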
Attach GPUs to Your Preemptible VM Instances
In the "Compute" section of the GCP portfolio, there have been a number of updates this month, including the new ability to attach GPUs to preemptible VM instances. A preemptible VM is simply an instance that you can create and manage at a far lower price than a typical instance. The trade-off is that Compute Engine may terminate those instances if the resources they use are required for other tasks.
On the plus side, if your applications can withstand possible preemptions, these instances can reduce your Compute Engine costs by a significant amount. A GPU attached to a preemptible instance functions just like a standard GPU for the life of the instance, and it costs a fraction of what you would pay for an on-demand GPU.
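As a rough sketch of what this looks like in practice (instance name, zone, and GPU type here are illustrative - check which accelerator types your chosen zone offers):

```shell
# Create a preemptible instance with one NVIDIA K80 GPU attached.
# GPU instances must be set to terminate (not live-migrate) on host maintenance.
gcloud compute instances create gpu-preemptible-1 \
    --zone us-central1-a \
    --preemptible \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --maintenance-policy TERMINATE
```

If the instance is preempted mid-job, your workload needs to be able to checkpoint and resume - batch rendering and ML training jobs are the classic fits.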
Three New Regions for Cloud Functions
As mentioned above, Google is all about expansion and growth. Cloud Functions is a regional system, which means that your functions run in a specific location managed by Google, with redundancy across all the zones within that region. In most cases, selecting the right region to run your Cloud Functions in will require you to carefully consider things like availability and latency. Selecting the region closest to your users can improve performance significantly.
Today, three new regions have been introduced for Cloud Functions, at a beta level.
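Picking a region is just a deploy-time flag. A minimal sketch, assuming an HTTP-triggered function (the function name and region here are illustrative):

```shell
# Deploy an HTTP-triggered function into a specific region
gcloud beta functions deploy helloWorld \
    --trigger-http \
    --region europe-west1
```

Note that the region is fixed per function: moving a function to a new region means redeploying it there.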
Update to the Compute Engine Trusted Image Policy
The Trusted Images policy on Google Compute Engine gives organizations the ability to ensure that only images meeting specific security and quality guidelines are used. This ensures that everyone shares and uses images safely.
By default, users in a project can create disks and copy images using any of the public images. The new policy ensures that you can restrict project members to trusted image sources, and so maintain higher quality and security in your Google Cloud projects.
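The restriction is enforced through the `constraints/compute.trustedImageProjects` organization policy constraint. A hedged sketch of applying it at the project level (the project IDs are illustrative):

```shell
# policy.yaml — only allow boot images sourced from a vetted image project
cat > policy.yaml <<'EOF'
constraint: constraints/compute.trustedImageProjects
listPolicy:
  allowedValues:
    - projects/my-trusted-image-project
EOF

# Apply the policy to a project (can also be set on a folder or organization)
gcloud beta resource-manager org-policies set-policy policy.yaml \
    --project my-project
```

Once set, attempts to create boot disks from images outside the allowed projects are rejected.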
Introducing Sole-Tenant Nodes
In a typical Google Cloud VM setup, instances run on physical hosts shared by many different customers. Now, Google is offering customers the chance to use sole-tenant nodes instead, which allow them to enjoy the added security that comes with having a physical server dedicated purely to their own VM instances.
Sole-tenant nodes are essentially physical Compute Engine servers that host VM instances specifically for your project. This can help you to keep your instances separated from other projects.
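Provisioning follows a template, group, and instance pattern. A sketch under assumed names (template, group, node type, and zone are all illustrative):

```shell
# 1. Create a node template describing the dedicated host hardware
gcloud beta compute sole-tenancy node-templates create my-template \
    --node-type n1-node-96-624 --region us-central1

# 2. Create a node group backed by that template
gcloud beta compute sole-tenancy node-groups create my-node-group \
    --node-template my-template --target-size 1 --zone us-central1-a

# 3. Schedule a VM onto the dedicated host via the node group
gcloud beta compute instances create my-instance \
    --node-group my-node-group --zone us-central1-a
```

Every VM created against the node group lands on hardware that only your project's instances share.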
Managed SSL Certificates for Custom Domains
Google also introduced a new security update this month: managed SSL certificates for custom domains. Managed SSL support allows companies to automatically provision SSL certificates for App Engine applications. Your certificates also renew automatically before they expire. This goes above and beyond the standard SSL certificate experience by giving you globally-distributed SSL endpoints and security for a worldwide audience.
Once your custom domain is mapped against your application and your DNS records are configured, the App Engine provisions managed certificates for you, and revokes the certificate when you remove or delete your custom domain.
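In CLI terms, the mapping step looks roughly like this (the domain is illustrative, and this assumes your DNS records already point at App Engine):

```shell
# Map a custom domain to the app; App Engine then provisions and renews
# the SSL certificate automatically
gcloud beta app domain-mappings create www.example.com \
    --certificate-management AUTOMATIC
```

From that point on there's nothing to rotate or re-upload - certificate lifecycle is Google's problem, not yours.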
Cloud Spanner STRUCT Query Parameters
On the storage and databases side of the Google Cloud Platform, Google is updating "Cloud Spanner" so that users can construct "STRUCT" objects from their data and use those objects as bound parameters when running SQL queries.
Configuring Audit Logs in GCP Console
Thanks to a new beta release in the Google Cloud Platform "Management Tools" segment, users can begin to configure their Data Access audit logs directly in the GCP console, without any need to edit Identity and Access Management policies. The new configuration options include:
- Services: You can specify the services that you want to receive audit logs from - for instance, you might choose to receive audit logs from Compute Engine or Cloud SQL.
- Default configurations: You can specify a default Data Access audit log configuration for an organization, project, or folder, which also applies to future GCP services that produce Data Access audit logs.
- Projects: You can configure Data Access audit logs for individual projects.
- Folders: You can configure Data Access audit logs on folders, which applies to all new and existing projects in the folder.
- Organizations: You can access and configure Data Access audit logs across an entire organization.
QUIC Support for HTTP(S) Load Balancing
The QUIC support for HTTP(S) Load Balancing update is now generally available for Google Cloud Platform users. QUIC is something that Google has been using for four years now. This UDP-based encrypted transport protocol is designed specifically to support HTTPS and deliver traffic for products all the way from YouTube to Google Search. Anyone reading this article in Google Chrome is probably already using QUIC.
QUIC is intended to make the web faster and reduce slow connections, and now cloud users can enjoy that same level of speed. Google is the first public cloud to support QUIC for HTTP(S) load balancers.
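Enabling it is a one-flag change on an existing load balancer's target proxy (the proxy name below is illustrative):

```shell
# Turn QUIC negotiation on for an existing HTTPS target proxy
gcloud compute target-https-proxies update my-https-proxy \
    --quic-override ENABLE
```

Clients that support QUIC (such as Chrome) upgrade automatically; everyone else falls back to TCP-based HTTPS with no change on your side.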
Range of Cloud Interconnect Options
To give users more control over their cloud experience, Google has also introduced a range of "Cloud Interconnect" options, which allow you to connect your on-premises infrastructure to the GCP cloud. This is obviously a huge step forward for anyone who wants to use a hybrid strategy to connect to the cloud. Interconnect options include:
- Dedicated Interconnect: Connect your on-premises network directly to a GCP VPC; your network must physically meet Google's at a supported colocation facility.
- Partner Interconnect: Extend your data centre network to your cloud projects through your preferred service provider. This lets you add connectivity from your on-premises network to your GCP VPC via a service provider partner, without meeting Google at a colocation facility.
- IPsec VPN: With Google Cloud VPN, you can connect your on-premises network to your VPC network through an IPsec connection over the public internet.
- Direct peering: You can access Google Cloud properties through direct peering if you meet Google's peering requirements.
- Carrier peering: If you don't meet Google's peering requirements, you can connect through a peering partner.
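Of the options above, the IPsec VPN is the one you can stand up entirely from the command line. A partial sketch (gateway name, region, peer address, and shared secret are all illustrative; the forwarding rules for ESP and UDP 500/4500 traffic are omitted for brevity):

```shell
# Create a Cloud VPN gateway in the VPC
gcloud compute target-vpn-gateways create my-vpn-gateway \
    --network default --region us-central1

# Establish an IKEv2 tunnel to the on-premises VPN device
gcloud compute vpn-tunnels create my-tunnel \
    --peer-address 203.0.113.1 \
    --shared-secret "my-shared-secret" \
    --target-vpn-gateway my-vpn-gateway \
    --region us-central1 \
    --ike-version 2
```

Routes then direct on-premises-bound traffic through the tunnel, giving you private connectivity without any physical cross-connect.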
BigQuery and Apache Parquet / Apache ORC Ingestion
To help companies on their quest to make more of their big data, Google has now introduced native support for loading Parquet data from Cloud Storage into the BigQuery environment. You can access native support for loading ORC files into BigQuery too - both at a beta level.
For those who aren't sure, Parquet is an open-source columnar data format widely used in the Apache Hadoop ecosystem. When you load Parquet data from Cloud Storage, you can load it into a new table or partition, append it to an existing table or partition, or overwrite an existing table or partition.
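With the new support, a load job is a single `bq` command - no conversion to CSV or JSON first (dataset, table, and bucket names are illustrative):

```shell
# Load Parquet files from Cloud Storage into a BigQuery table;
# the schema is read from the Parquet files themselves
bq load --source_format=PARQUET \
    mydataset.mytable \
    gs://my-bucket/data/*.parquet
```

Because Parquet is self-describing, you don't need to supply a schema; BigQuery infers it from the files.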
Cloud Dataflow's New Streaming Engine
In another update for the data lovers, Cloud Dataflow is introducing a streaming engine this month, which moves windowing state and shuffle storage out of the worker virtual machines and into the Cloud Dataflow backend service. The benefits of this system range from more responsive autoscaling in streaming pipelines to enhanced supportability.
Up until now, the Cloud Dataflow pipeline runner has executed streaming pipelines entirely on worker virtual machines - which means they consume a lot of worker CPU and memory. The streaming engine helps to overcome this problem. The streaming engine:
- Reduces the CPU, memory, and persistent disk storage needs of your worker VMs
- Improves supportability, since applying service updates doesn't require redeploying pipelines
- Provides smoother and more granular worker scaling
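Opting in happens at launch time via a pipeline experiment flag. A hedged sketch assuming a Java/Maven pipeline (the main class, project, and region are illustrative):

```shell
# Launch a streaming pipeline with the Streaming Engine beta enabled
mvn compile exec:java -Dexec.mainClass=com.example.MyStreamingPipeline \
    -Dexec.args="--runner=DataflowRunner \
                 --project=my-project \
                 --region=us-central1 \
                 --experiments=enable_streaming_engine"
```

Because state moves off the workers, you can typically run the same pipeline on smaller worker machine types.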
Custom Service Accounts for Cloud Dataflow
The third big update for Cloud Dataflow this month comes in the form of new custom service accounts designed to help companies enhance the security of their cloud systems. Cloud Dataflow users will be able to specify custom service accounts, rather than using a default service account.
Generally, when you run your pipeline, Cloud Dataflow uses two different service accounts to manage your permissions and security: the controller service account and the Cloud Dataflow service account. The new custom service account support will help offer more granular access control for security-conscious companies.
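Specifying the custom account is another pipeline option at launch time. A sketch, again assuming a Java/Maven pipeline (the account and project names are illustrative, and the account needs the Dataflow worker permissions):

```shell
# Run pipeline workers as a custom service account
# instead of the default Compute Engine service account
mvn compile exec:java -Dexec.mainClass=com.example.MyPipeline \
    -Dexec.args="--runner=DataflowRunner \
                 --project=my-project \
                 --serviceAccount=dataflow-worker@my-project.iam.gserviceaccount.com"
```

This lets you scope each pipeline to exactly the data sources and sinks it needs, rather than inheriting the broad default account.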
Cloud Pub/Sub Alerts for Cloud Source Repositories
On the developer side of things, we have an update to the Pub/Sub notifications you can access for your Cloud Source repositories. The latest beta release allows users to add Cloud Pub/Sub notifications to a Cloud Source Repositories project, or to individual repositories, using the gcloud command-line tool. You can simply type the following command to get started:
gcloud beta source repos update hello-world --add-topic=csr-test
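One prerequisite worth noting: the Pub/Sub topic has to exist before you can attach it to a repository. Keeping the topic name from the example above:

```shell
# Create the topic first; attaching a nonexistent topic will fail
gcloud pubsub topics create csr-test
```

After that, every push to the repository publishes a message to the topic, which you can consume from any Pub/Sub subscriber.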
Cloud Source Repositories Private Security Key Detection
Last but not least, Google has released a beta "pushblock" for Cloud Source Repositories, designed to stop users from storing their security keys in the repository system. The feature works by checking pushes for GCP service account credentials and PEM-encoded private keys.
Since Google is all about security and privacy, they want to remind their users that they should never be storing their security keys in version control systems. This new update will help prevent this by checking automatically for:
- RSA, DSA and PGP keys
- JSON format Google Platform account credentials
Stay Up to Date at Google Cloud Next 18!
There you have it! Everything you need to know about the latest changes to the Google Cloud Platform in July. Of course, with Google Cloud Next 18 set for the end of the month, there's bound to be even more exciting updates to come. Stay tuned with Coolhead Tech or come and meet us at the Next event to make sure that you're always ahead of the curve when it comes to Google Cloud technology.
HEY! If you are in Austin, Texas, come Rock the G Suite with us for Diane Greene's keynote, local sessions, lunch and fun!
Register for NEXTJAM in Austin on July 24 to watch the Google Cloud Next 2018 keynotes and participate in local sessions.