Machine learning (ML) has grown rapidly in recent years. While traditionally executed primarily on remote servers, ML is quickly becoming integral to modern web and mobile applications.
I’m a Senior Software Engineer at MEV with 13 years of experience in the industry. Recently, I had the opportunity to develop a web application utilizing machine learning. During this project, I uncovered a few tools and techniques that may help others go beyond classical programming to solve similar tasks, which I’ll cover below.
One of the ML tasks in our project was implementing real-time object detection to generate a bounding box that acts as a safety perimeter around a specific area in a video stream. If breached, a notification would be triggered to alert the team.
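The detection model supplies bounding boxes; the breach check itself is ordinary geometry. The sketch below (plain JavaScript, not our production code) assumes boxes in `[xMin, yMin, width, height]` format, which is what detection models such as those run with TensorFlow.js commonly emit:

```javascript
// Axis-aligned bounding-box intersection test.
// Box format assumed: [xMin, yMin, width, height].
function intersects(a, b) {
  return (
    a[0] < b[0] + b[2] &&
    b[0] < a[0] + a[2] &&
    a[1] < b[1] + b[3] &&
    b[1] < a[1] + a[3]
  );
}

// Invoke `notify` for every detection that breaches the safety perimeter.
function checkPerimeter(perimeter, detections, notify) {
  for (const d of detections) {
    if (intersects(perimeter, d.bbox)) notify(d);
  }
}

// Example: one detection overlaps the perimeter, one is outside it.
const perimeter = [100, 100, 200, 200];
const detections = [
  { class: "person", bbox: [250, 250, 100, 100] }, // overlaps
  { class: "person", bbox: [400, 400, 50, 50] },   // outside
];
const breaches = [];
checkPerimeter(perimeter, detections, (d) => breaches.push(d));
console.log(breaches.length); // 1
```

In the real application, `checkPerimeter` would run once per processed video frame against that frame's detections.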
Because the device was required to comply with the HIPAA standard for processing and storing protected health information, we immediately ruled out external AI services (e.g., ChatGPT, Gemini) that do not meet this standard. Moreover, these services incur additional costs.
Popular cloud platforms offer services for working with ML, such as Amazon SageMaker, Azure Machine Learning, and Google Vertex AI. These platforms provide a wide range of tools for creating, training, and deploying ML models.
However, these platforms share the disadvantages noted earlier: protected data must leave the device, and usage incurs additional costs. Further performance issues arise when running inference on a server without a GPU or via FaaS (e.g., AWS Lambda). Moreover, this route would require developing an API with all the necessary security measures.
Since we were dealing with WebOS, we had the opportunity to place the ML model directly on the device, develop a native service in C/C++ for inference and necessary data processing, and use it in the web application.
This approach has several advantages: the data never leaves the device (which helps with HIPAA compliance), there are no external service costs, and inference can make direct use of the device's SoC capabilities.
In our case, the main obstacle to using this approach was the additional time required to learn and apply several new technologies, which we did not have due to release deadlines.
Running the model inside the web application itself gives us the same advantages as the previous approach (except for direct use of the SoC's capabilities), plus much faster integration of ML functionality into the existing web application.
When searching for libraries to work with ML in web applications, TensorFlow.js emerged as the most attractive option.
TensorFlow.js is a JavaScript library developed by Google that allows developers to create, train, and execute machine learning models directly in the browser or on a server using Node.js. This library is part of the broader TensorFlow ecosystem.
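As a minimal illustration (a sketch, not our production setup), TensorFlow.js can be loaded in the browser straight from a CDN and used without any build step:

```html
<!-- Load TensorFlow.js from a CDN; jsDelivr resolves the latest version. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script>
  // Create a tensor and run an operation entirely in the browser.
  const t = tf.tensor2d([[1, 2], [3, 4]]);
  t.square().print(); // logs [[1, 4], [9, 16]] to the console
</script>
```

In a bundled application or under Node.js, the same API is available via the `@tensorflow/tfjs` npm package instead of a script tag.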
An additional advantage of using the TensorFlow ecosystem is the availability of tools for converting models to the TensorFlow.js and TensorFlow Lite formats (notably, TensorFlow Lite models can be used in web applications in addition to mobile devices). However, it's important to note that not all functions available in TensorFlow can be successfully converted to another format. Therefore, it is necessary to check for compatibility and make appropriate changes or choose another model.
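For example, a Keras model saved from Python can be converted with the `tensorflowjs_converter` CLI (shipped with the `tensorflowjs` Python package; the file names below are placeholders):

```shell
# Install the converter and turn a saved Keras model into TF.js format.
pip install tensorflowjs
tensorflowjs_converter --input_format=keras model.h5 web_model/
```

The resulting `web_model/model.json` (plus weight shards) can then be loaded in the browser with `tf.loadLayersModel('web_model/model.json')`. It is at exactly this step that unsupported operations surface, which is why compatibility should be checked before committing to a model.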
TensorFlow.js allows developers to use a single programming language for both model training and execution. This presents an advantage because web developers can potentially develop and maintain the product without involving ML engineers. However, the reality is that machine learning is a niche field, and not every web developer has the expertise to leverage it. Therefore, involving qualified ML engineers is generally unavoidable.
That said, because the most popular machine learning tools and libraries predominantly use Python, finding an ML engineer with sufficient expertise in JavaScript and TensorFlow.js can be difficult. Although the TensorFlow.js and TensorFlow (Python) APIs are similar, understanding the differences and finding equivalent functions may require additional time and effort.
Another reason to avoid using TensorFlow.js for model training is the scarcity of ready-made examples in JavaScript. In these cases, using an existing solution rather than writing code from scratch is much faster and more convenient. However, this advantage only applies to the model training stage. When it comes to deployment, the differences between Python and JavaScript code become an issue again.
Since model data often requires preprocessing and post-processing, the amount of code that needs to be converted from Python to JavaScript can be quite substantial. This requires developers proficient in all relevant technologies (ML, Python, JavaScript, TensorFlow, TensorFlow.js, etc.). Therefore, it may be more practical to spend time training an ML engineer in JavaScript technologies rather than teaching a web developer ML.
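As a small illustration of the kind of code that gets ported (a sketch; the actual preprocessing depends on the model), a typical NumPy normalization step must be re-expressed in JavaScript. The `mean` and `std` values below are illustrative, not taken from any specific model:

```javascript
// Python/NumPy original, for comparison:
//   pixels = (pixels / 255.0 - mean) / std
// The same preprocessing re-expressed in plain JavaScript.
function normalize(pixels, mean = 0.5, std = 0.25) {
  return pixels.map((p) => (p / 255 - mean) / std);
}

console.log(normalize([0, 127.5, 255])); // [-2, 0, 2]
```

Multiply this by every resize, channel reorder, and output-decoding step a model needs, and the porting effort becomes significant.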
The TensorFlow ecosystem provides powerful tools for training and executing ML models on virtually any platform, offering extensive opportunities to apply machine learning to various use cases.
Compared to other libraries, TensorFlow.js's broad capabilities and strong performance make it an excellent choice for machine learning in web applications.
However, TensorFlow.js is a relatively young technology, so ready-made solutions for specific tasks may be limited and hard to find. Additionally, despite their similarities, TensorFlow and TensorFlow.js have enough differences to create additional development challenges, which is something to consider when forming development teams and timelines.
Is TensorFlow right for your next ML project? Chat with one of our experts for answers and insights.