Why Does Google Have the Coolest Data Centre Ever?
Google is one of the most innovative companies around today. From Google Maps to self-driving cars, it applies AI everywhere. Because Google serves a huge share of the internet's users, it maintains some of the most mission-critical infrastructure in the world to back services like Search and Gmail, with data centres all over the globe. From its earliest days, Google understood that better infrastructure means better service, and it has built its own customised hardware. Google runs one of the biggest internal networks on the planet; by some measures it ranks among the largest network operators in the world. Many of its engineers have shared with the world the lessons of operating such a large infrastructure. One of the most innovative ideas was to use AI in the data centre.
You can find more detailed information on AI through Intellipaat's artificial intelligence course. Intellipaat is a global leader in online professional training, with self-paced and instructor-led courses in 150 of the most in-demand tools and technologies, including big data, data science, AI, blockchain and many more.
Google is using AI to improve the efficiency of its data centres. One of the biggest applications it found was the cooling system, the most expensive part of a data centre to run; that is why many companies build data centres in cold climates such as Norway and Greenland. Cooling alone can account for up to 40% of a data centre's total energy consumption. Google tackled this problem with AI. Inside the data centre, the temperature has to be kept between 21 and 24 degrees Celsius for the equipment to run at its best.
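To put the 40% figure in perspective, here is a back-of-the-envelope sketch using PUE (Power Usage Effectiveness), the standard data-centre efficiency metric. All numbers below are made up for illustration; Google does not publish this breakdown per facility.

```python
# Made-up figures: cooling at 40% of total facility energy, as the article
# cites, plus a small share for power distribution and lighting.
total_kwh = 1000.0
cooling_kwh = 0.40 * total_kwh                          # 400 kWh
other_overhead_kwh = 50.0
it_kwh = total_kwh - cooling_kwh - other_overhead_kwh   # 550 kWh of useful IT load

# PUE = total facility energy / IT energy (1.0 would be a perfect facility).
pue_before = total_kwh / it_kwh

# A DeepMind-style optimisation: 40% less energy spent on cooling.
cooling_after_kwh = cooling_kwh * (1 - 0.40)            # 240 kWh
pue_after = (it_kwh + cooling_after_kwh + other_overhead_kwh) / it_kwh

print(f"PUE before: {pue_before:.2f}, after: {pue_after:.2f}")
```

Even in this toy example, trimming cooling energy by 40% drops the PUE noticeably, which at Google's scale translates into enormous absolute savings.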
Google uses IoT sensors to maintain the temperature automatically. Other parameters, such as fan speed and machine temperature, are also used to predict whether a particular part might fail in the near future; this proactive approach is invaluable for mission-critical systems like a data centre. Google has also built an AI-based recommendation system that takes a snapshot of the data centre at regular intervals and feeds those readings into a neural network. This neural network, developed with Google's DeepMind, learns new ways to improve the data centre's efficiency. More than 120 variables, such as fan speeds and the temperature of power ICs, go into this recommendation system to suggest actions that can improve efficiency.
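The snapshot-to-recommendation loop described above can be sketched in a few lines. Everything here is hypothetical: the sensor layout, the tiny randomly initialised network standing in for DeepMind's trained model, and the idea that one input encodes a controllable cooling setpoint are all illustrative assumptions, not Google's actual system.

```python
import random

random.seed(0)

# Hypothetical snapshot: ~120 normalised sensor readings (fan speeds,
# temperatures, pump rates, ...). Real telemetry names are not public.
N_SENSORS = 120
snapshot = [random.uniform(0.0, 1.0) for _ in range(N_SENSORS)]

# A minimal one-hidden-layer network standing in for the trained model.
# Weights are random here; the real model was trained on historical logs.
HIDDEN = 16
W1 = [[random.gauss(0.0, 0.1) for _ in range(N_SENSORS)] for _ in range(HIDDEN)]
W2 = [random.gauss(0.0, 0.1) for _ in range(HIDDEN)]

def predict_score(x):
    """Map a sensor snapshot to a predicted inefficiency score (lower = better)."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]  # ReLU
    return sum(w * h for w, h in zip(W2, hidden))

# Recommendation step: try candidate cooling setpoints and keep the one the
# model predicts to be most efficient.
SETPOINT_INDEX = 0  # assume sensor 0 encodes a controllable setpoint
candidates = [i / 10 for i in range(11)]
scores = []
for c in candidates:
    trial = list(snapshot)
    trial[SETPOINT_INDEX] = c
    scores.append(predict_score(trial))

best = candidates[scores.index(min(scores))]
print(f"recommended setpoint (normalised): {best:.1f}")
```

The real system is vastly more sophisticated, but the shape is the same: read a snapshot, score candidate actions with a learned model, and recommend the best one.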
Google is also continuously innovating in infrastructure monitoring. It documented its tools, technologies and practices around data-centre operations over a decade ago: in 2003, Google introduced "Site Reliability Engineering". A site reliability engineer is a professional who takes the lead in data-centre operations. The role is close to that of a DevOps engineer, but not identical; a site reliability engineer has a more narrowly defined role and set of practices than DevOps.
DevOps is a loosely defined term that works differently for companies in different domains. Because Google hosts some of the most popular services on the internet, it has massive infrastructure all over the world, and to keep services highly available and fault tolerant its engineers follow a strict set of practices. While upholding that benchmark of high availability and fault tolerance, Google's engineers have created many new tools and technologies. Kubernetes, an open-source container-orchestration system built by engineers at Google and informed by their internal cluster-management experience, is one of them, and it is now used by companies around the world.
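As a small taste of what Kubernetes does, here is a minimal Deployment manifest that keeps three replicas of a web server running, restarting them automatically if they fail. The names (`web`, the `nginx` image) are illustrative choices, not anything specific to Google's services.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # Kubernetes keeps exactly three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Declaring the desired state and letting the system converge on it is the core idea; it is a big part of why the tool caught on far beyond Google.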
By applying engineering ideas to these problems, Google set a benchmark for highly available services and cut the energy used for cooling its data centres by 40%. That is a huge saving at the scale of Google's data centres. When Google announced the 40% reduction, many people were sceptical, so Google released a video showing how it operates its data centres internally.
Many smaller companies are also following these philosophies to improve the uptime of their infrastructure. With the emergence of the cloud, other companies can now apply the lessons of "Site Reliability Engineering" and DevOps tools like Kubernetes. Google has always maintained high standards of availability and fault tolerance, and many IT engineers are now taking training to learn these tools. Intellipaat offers training in the various tools and technologies used in DevOps and cloud computing. These courses are designed by experienced professionals and have helped many learners pick up advanced DevOps and cloud patterns and practices.
Vaishnavi Agrawal loves pursuing excellence through writing and has a passion for technology. She has successfully managed and run personal technology magazines and websites. She currently writes for intellipaat.com, a global training company that provides e-learning and professional certification training. The courses offered by Intellipaat address the unique needs of working professionals. She is based in Bangalore and has five years of experience in content writing and blogging. Her work has been published on various sites covering Big Data Online Training, Business Intelligence, Project Management, Cloud Computing, IT, SAP and more.