An Executive’s Guide To Understanding Cloud-based Machine Learning Services
Machine learning platforms are among the fastest-growing services in the public cloud. Unlike other cloud-based services, ML and AI platforms are available through diverse delivery models such as cognitive computing, automated machine learning, ML model management, ML model serving, and GPU-based computing.
This article attempts to explain the terminology and delivery models adopted by public cloud providers. It aims to help business decision makers choose the right cloud-based ML and AI service.
Like the original cloud delivery models of IaaS, PaaS, and SaaS, the ML and AI spectrum spans infrastructure, platform, and high-level services exposed as APIs.
Let’s take a closer look at each of these layers.
Cognitive computing APIs
Cognitive computing is delivered as a set of APIs that offer computer vision, natural language processing (NLP), and speech services. Developers consume these APIs like any other web service or REST API; they are not expected to know the intricate details of machine learning algorithms or data processing pipelines to take advantage of these services.
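As a rough illustration, here is how a developer might call a cloud vision API over REST. The endpoint, API key, and response fields are hypothetical placeholders rather than any specific provider's interface.

```python
import base64
import requests

# Hypothetical cognitive vision endpoint -- substitute your provider's URL and key.
ENDPOINT = "https://vision.example-cloud.com/v1/analyze"
API_KEY = "your-api-key"

def detect_objects(image_path):
    """Send an image to a (hypothetical) object-detection API and return labels."""
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("utf-8")}

    response = requests.post(
        ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    # The response schema varies by provider; here we assume a list of labels.
    return response.json().get("labels", [])

print(detect_objects("parking_lot.jpg"))  # e.g. ["car", "person", "traffic light"]
```

The point is that, from the application's perspective, AI arrives as just another web service call, with no model training involved.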
The quality of these services improves as their consumption grows: with more data and usage, cloud providers continually refine the accuracy of their predictions.
A more recent addition to cognitive computing is the automated machine learning (AutoML) service, where developers train the service with custom data before consuming the resulting API. AutoML offers a middle ground between consuming pre-trained models and training custom models from scratch.
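A typical AutoML workflow might look like the sketch below. The `automl_client` module, dataset format, and method names are illustrative assumptions, not a real SDK.

```python
# Hypothetical AutoML client -- names are illustrative, not a real SDK.
from automl_client import AutoMLClient

client = AutoMLClient(project="my-project", region="us-east-1")

# 1. Upload a labeled dataset (e.g. CSV rows of image URIs and category labels).
dataset = client.create_dataset(name="car-models", source="s3://my-bucket/cars.csv")

# 2. Let the service search model architectures and hyperparameters automatically.
model = client.train(dataset=dataset, objective="image_classification", budget_hours=2)

# 3. Consume the trained model through the same style of API as a pre-trained service.
prediction = model.predict("s3://my-bucket/unknown_car.jpg")
print(prediction.label, prediction.confidence)
```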
If you are considering adding AI capabilities to existing or new applications, ask your developers to evaluate cognitive services in the public cloud. From object detection to sentiment analysis, you will be able to tap into readily available AI services. Think of these APIs as the SaaS equivalent of AI, where you only pay for what you use.
ML platform as a service
When cognitive APIs fall short of requirements, you can leverage ML PaaS to build highly customized machine learning models.
For example, while a cognitive API may be able to identify a vehicle as a car, it may not be able to classify the car by make and model. Assuming you have a large dataset of cars labeled with make and model, your data science team can rely on ML PaaS to train and deploy a custom model tailor-made for the business scenario.
Similar to the PaaS delivery model, where developers bring their code and host it at scale, ML PaaS expects data scientists to bring their own dataset and the code that trains a model against it. They are spared from provisioning the compute, storage, and networking environments needed to run complex machine learning jobs. Data scientists are expected to create and test the code against a smaller dataset in their local environments before running it as a job on the public cloud platform.
ML PaaS removes the friction involved in setting up and configuring data science environments. It provides pre-configured environments that data scientists can use to train, tune, and host models, and it handles the lifecycle of a machine learning model with tools that span data preparation through model hosting. These platforms come with popular tools such as Jupyter Notebooks, which are familiar to data scientists, and they tackle the complexity of running training jobs on a cluster of machines, abstracting the underpinnings behind simple Python or R APIs.
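From a data scientist's point of view, that abstraction might look roughly like the sketch below. Here `mlpaas` is a hypothetical SDK standing in for a provider-specific library, and the class and method names are assumptions rather than a real API.

```python
# Hypothetical ML PaaS SDK -- module, class, and method names are illustrative only.
from mlpaas import TrainingJob, Model

# Point the platform at the training script and dataset; the service provisions
# the compute, runs the job on a managed cluster, and stores the trained artifacts.
job = TrainingJob(
    entry_script="train_car_classifier.py",    # code tested locally on a small sample
    dataset_uri="s3://my-bucket/labeled-cars/",
    instance_type="gpu.large",
    hyperparameters={"epochs": 20, "learning_rate": 1e-3},
)
job.run(wait=True)

# Deploy the resulting model behind a managed endpoint for serving.
model = Model.from_job(job)
endpoint = model.deploy(instance_count=2)
print(endpoint.predict({"image_uri": "s3://my-bucket/unknown_car.jpg"}))
```

The value is that none of the cluster setup, scheduling, or serving infrastructure appears in the data scientist's code.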
If your business wants to bring agility into machine learning model development and deployment, consider ML PaaS. It combines the proven technique of CI/CD with ML model management.
ML infrastructure services
Think of ML infrastructure as the IaaS of the machine learning stack. Cloud providers offer raw VMs backed by high-end CPUs and accelerators such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).
Developers and data scientists who need access to raw compute power turn to ML infrastructure. They rely on DevOps teams to provision and configure the required environments. The workflow is no different from setting up a VM-based testbed for web or mobile application development. From choosing the number of CPU cores to installing a specific version of Python, DevOps teams own the end-to-end configuration.
For complex deep learning projects that rely heavily on niche toolkits and libraries, organizations choose ML infrastructure. It gives them ultimate control over the hardware and software configuration, which may not be available from ML PaaS offerings.
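Once DevOps hands over such an environment, a data scientist might sanity-check it with a few lines like the ones below. The specific checks shown (interpreter version and GPU visibility via nvidia-smi) are just one plausible set of assumptions about what the team cares about.

```python
import subprocess
import sys

# Confirm the interpreter version DevOps was asked to install.
print("Python:", sys.version.split()[0])

# Confirm the GPUs and drivers are visible to the VM (requires NVIDIA drivers).
try:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("GPUs:", result.stdout.strip() or "none detected")
except FileNotFoundError:
    print("nvidia-smi not found -- GPU drivers may not be installed on this VM")
```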
Recent hardware investments from Amazon, Google, Microsoft, and Facebook have made ML infrastructure cheaper and more efficient. Cloud providers now offer custom hardware that is highly optimized for running ML workloads in the cloud. Google's TPUs and Microsoft's FPGA offerings are examples of custom accelerators built exclusively for ML jobs. Combined with recent computing trends such as Kubernetes, ML infrastructure becomes an attractive choice for enterprises.
With ML becoming a significant workload, public cloud providers are investing in core infrastructure, platforms and services to attract enterprise customers.