You've heard about everything Machine Learning (ML) is capable of: detecting fraud, predicting machine failure, or understanding customer behavior. ML delivers tremendous impact across a variety of businesses.
During mimacom's 20th anniversary event, we organized a Hackathon in which all mimacom software engineers participated. The goal of the Hackathon was to come up with and build use-cases, features, and applications that could be useful for our customers. That's why all contributions focused on business applicability. It was also an opportunity to improve our skills by tinkering with new technologies.
In my opinion, the biggest problem with ML right now is that it mostly lives in the world of academia and commercial research groups. For us, this was a good opportunity to compare AWS SageMaker, Azure ML and GCP ML based on a defined idea. We were curious how these fully managed services compare to each other in maturity and in their ability to build, train, and deploy machine learning models quickly.
Our idea was to use a pre-trained ML model to geo-locate images by applying a location label (i.e. the name of the landmark or latitude/longitude). Modern mobile devices can automatically assign geo-coordinates to images when pictures are taken with them. However, most images on the web still lack this location metadata. This use-case is quite challenging for beginners and covers a difficult high-level computer vision problem. There is a considerable body of scientific papers and articles on the topic; see the im2gps project for further information.
So, do we now provide a comparison in this post? No, we failed miserably even before we started with the comparison, due to a lot of mistakes and bad assumptions. Instead, we summarize our lessons learned and give you a small insight into what has to be considered before starting an ML project.
Lesson 1: Focus on Quick Wins
Our ML project failed due to inflated expectations of what ML can do. Before starting an ML project, identify its goals. It's important to define a simple, realistic and time-boxed use-case, specify measurable success criteria, and regularly benchmark the current performance. The SMART method is ideal for this purpose. It stands for the following terms:
- S - Specific
- M - Measurable
- A - Achievable
- R - Reasonable
- T - Time Bound
Lesson 2: Think about the Data before You Get Started
I can't emphasize this enough: if you put garbage in, you'll get garbage out. Of course, you need a lot of data in order to be successful, but it has to be of the right amount and quality.
While hacking on our project, we realized that our data was too large and too noisy to finish on time. We decided to narrow our model down to recognizing pictures taken in a particular city (in our case: Stuttgart), which shrank the amount of data significantly. But it was then too little to create an accurate model. We hadn't taken the time up front to dive deep into the yfcc100m dataset and curate a small chunk of data that would enable accurate image recognition. In the end, this mistake was crucial to our failure.
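To illustrate the kind of narrowing we attempted, here is a minimal sketch of filtering image metadata down to a city bounding box. The record layout and the Stuttgart bounding-box values are illustrative assumptions, not the actual yfcc100m schema:

```python
# Hedged sketch: filter image metadata records to a city bounding box.
# The (photo_id, lat, lon) layout and the coordinates below are
# illustrative assumptions, not the real yfcc100m schema.

# Approximate bounding box around Stuttgart (illustrative values).
LAT_MIN, LAT_MAX = 48.69, 48.87
LON_MIN, LON_MAX = 9.04, 9.32

def in_stuttgart(lat, lon):
    """Return True if the coordinate falls inside the bounding box."""
    return LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX

def filter_records(records):
    """Keep only records geo-tagged inside the bounding box.

    Each record is a (photo_id, lat, lon) tuple; records without
    coordinates are dropped as unusable for this task.
    """
    return [r for r in records
            if r[1] is not None and r[2] is not None
            and in_stuttgart(r[1], r[2])]

# Tiny in-memory sample instead of the real 100M-row dataset.
sample = [
    ("a", 48.78, 9.18),   # central Stuttgart -> keep
    ("b", 52.52, 13.40),  # Berlin -> drop
    ("c", None, None),    # missing geo-tag -> drop
]
print([r[0] for r in filter_records(sample)])  # ['a']
```

Even a crude filter like this shows the trade-off we ran into: the bounding box shrinks the data dramatically, and the remaining chunk may no longer be large enough to train on.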
If you're in the inception phase of your ML project, ask yourself the following questions:
- What type of data is needed?
- How much data is needed?
- Do we have that data?
I highly recommend creating a small chunk of labeled data to verify the feasibility of your project. You can increase the amount of data in later iterations.
Lesson 3: Sharing
In the Hackathon, the closer we got to the deadline, the more frustrated we became. Each team member worked in isolation to reach our unreachable target. This was another reason why we failed. Information or knowledge won't provide much value if it isn't communicated to the rest of the team.
Sharing knowledge is the cornerstone of effective collaboration, especially when the team members are not experts in ML. It gives the group a common frame of reference, allows everyone to interpret situations and decisions correctly, helps people understand one another better, and greatly increases efficiency.
We are engineers with almost no ML experience, so we were all communicating on the same level. In a team whose broad and balanced skill set covers ML projects end-to-end, knowledge sharing becomes even more crucial.
Lesson 4: Reliable Infrastructure
The important question is whether to rely on Cloud Services such as AWS SageMaker, Azure ML and GCP ML or to build your own toolset for ML on your own machine.
In my opinion, it depends. All the ML Cloud Services we tested are mature enough and mostly equal in terms of features. There is also a vast amount of resources available to help you get set up quickly. I would recommend taking the first steps on a green field on your local machine, consolidating the results and knowledge within your team, and only then considering a Cloud Service.
ML projects require weighing many considerations, such as the amount of time to implement a feature, the level of difficulty of applying ML algorithms, and the business value. Sometimes everything goes as planned. More often, reality refuses to comply with the roadmap and we recognize that the initial estimates didn't work out. Instead of starting or continuing a project that's difficult to implement, you should stop it and focus on the quick wins.