Cognitive Services + Xamarin.Forms = Smart Applications. Today we are seeing how artificial intelligence (AI) can be used to improve the experience we offer to users in different contexts.
Many of you have asked me what the next level is after learning how to develop mobile applications with Xamarin. In short, this is it: it is time to build smart services and applications backed by the availability, security and scalability that AI provides.
This article is only an introduction to a series of chapters in which we will detail each service and its implementation in Xamarin.Forms. I know that many of you, especially the newest, believe this topic is for people with deep AI knowledge, but the truth is that Azure and Cognitive Services do all the work for you. Don’t worry about it.
If you work in a development team, you only need to keep in mind how to use the service in order to take advantage of it.
Before we get into the subject, we need to know where this approach to mobile development is headed and how we can get more benefit from it. As I mentioned in one of my previous articles (you can see it here), today we are experiencing very accelerated technological growth. It is becoming more and more necessary to be connected to the cloud of information provided by the Internet. Technology makes our lives much easier in that sense; we can even have a personal assistant inside our phones, thanks to AI.
According to statistics, 80% of the users who access the Internet are mobile users. This is because there are more than 5 billion smart devices in the world, which in turn translates into more than 26 billion applications downloaded across the official application stores.
There is an e-book published by PacktPub called “Skill Up 2018” that mentions the technologies that most attract developers’ attention today: Machine Learning and Big Data. Big Data is the discipline that analyzes large volumes of data, and Machine Learning is the discipline that gives systems the ability to learn without being explicitly programmed.
To understand the relationship between the two in this context: Machine Learning feeds on large volumes of data in order to identify patterns and offer solutions. But here is the curious part: if 80% of users are mobile, where do you think large companies get their data? Exactly, from mobile devices.
Gartner, a consulting and information technology research firm, has published some very interesting statistics: by 2022, 70% of interactions with enterprise systems will take place on mobile devices. This is because the way human beings communicate is changing with mobility.
Everything evolves, and it is up to us to evolve as well. If you are not yet into mobile development, you are still on time.
Artificial intelligence is a set of technologies that allow ‘machine intelligence’ to simulate or augment elements of human thought.
There is a very subtle but important aspect to this definition. We are not advocating that machine intelligence “replace” human intelligence. Rather, machine intelligence can be used effectively to “augment” or improve the human experience, through a system of more specialized use cases assigned to individual elements of human thought.
The challenge of AI is to make machines capable of conjecture, at a scale and with a level of precision that we as humans cannot reach. Machines “learn” as they process data specially designed and collected for their training. Artificial intelligence is based on a probabilistic method, by which machines orient themselves toward what they perceive as the most probable reality.
And this reality changes constantly. By processing more and more data, machines continually adjust their model of reality, intuiting what they are “seeing”, “hearing” or “interpreting”. A smart assistant (like Alexa from Amazon, Siri from Apple, Xperia Agent from Sony or Bixby from Samsung) listens to what you say, analyzes how to interpret it and, finally, offers you an answer. Will it be the right one? Maybe not, but they learn: when they make mistakes, they adjust and improve.
Microsoft is not new to the artificial intelligence world. Over the years it has made many investments in AI, and it now has a very solid portfolio of technologies that we can use to create our own intelligent applications.
This set of technologies ranges from data management, data analysis and computing infrastructure to AI-driven API services, machine learning as a service and neural networks. In this article we will focus on one of the technologies related to AI: Cognitive Services.
With Cognitive Services we can incorporate intelligent algorithms into our applications that allow them to see, hear, speak, understand and interpret the needs of users through natural forms of communication. Azure provides an API through which we can easily interact with Cognitive Services to transform our applications.
You may wonder how it works, or why you can trust it. Because the Cognitive Services APIs leverage the power of machine learning, we can integrate advanced intelligence into our applications without needing a team of data scientists.
Microsoft Cognitive Services
Microsoft Cognitive Services is a set of APIs, SDKs and services available to developers to make their applications more intelligent by adding features such as facial recognition, speech recognition and language understanding.
Cognitive Services is based on cognitive computing, a branch of artificial intelligence capable of understanding and emulating human thought processes through self-learning systems that use machine learning for pattern recognition and natural language processing, all in order to imitate the functioning of the human brain.
Some advantages of cognitive computing are:
- Natural and contextual interaction with users.
- Smarter applications that give more value to the user.
- New business models driven by knowledge extracted from data.
Microsoft Cognitive Services offers a range of cognitive API services that take advantage of the power of machine learning so that we can easily incorporate advanced intelligence into our products.
Let’s see some of these below:
- Face recognition:
Detect, analyze, organize and generate tags for faces in photographs.
- Recognition of Emotions:
Customize user experiences through the recognition of emotions.
- Content moderator:
Moderate text, images and video, with options for human review.
- Video API:
Analysis, editing and processing of video in applications.
- Computer vision:
Extraction of actionable information from images.
- Bing Speech API:
Convert speech to text and back again, and understand its intent.
- Custom Speech Service:
Tune speech recognition for anyone, anywhere.
- Speaker recognition:
Use speech to identify and authenticate independent speakers.
- Language Understanding (LUIS):
Teach applications to understand users’ commands.
- Text Analytics:
Evaluate sentiment and topics to understand what users want.
- Web Language Model:
Predictive language models trained on data from the entire web.
- Bing Spell Check:
Detect and correct spelling errors in applications.
- Linguistic analysis:
Simplification of complex language concepts.
- Translator:
Voice and text translation with a single call to a REST API.
- Recommendations:
Predict which items a client might want.
- Academic knowledge:
Relationships between academic documents, publications and authors.
- Entity linking:
Extend knowledge of people, locations and events.
- Knowledge exploration:
Add search over structured information to applications.
- QnA Maker:
Distill information into easy-to-navigate answers.
- Bing Autosuggest:
Provide intelligent autocomplete for your applications.
- Bing Image Search:
Link metadata with image search.
- Bing News Search:
Connect users with robust and timely news searches.
- Bing Video Search:
Trending videos, detailed metadata and rich results.
- Bing Web Search:
Advanced search for applications.
What you need to know
Before starting, if you are new to this, it is good to know that each service is handled differently. That is why we will create a series in which we will walk through the implementation of several of these services. It is also important that you feel comfortable consuming REST services.
If you do not know how to consume RESTful services in Xamarin, you can go to the official documentation, which is very complete. It is good that you at least understand the context of the following section and know how and why the commented lines are used.
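To give an idea of what this looks like in practice, here is a minimal sketch of calling a Cognitive Services REST endpoint (the Face API detect operation) with `HttpClient` from shared code. The region in the endpoint URL and the subscription key are placeholders, not real values; you would replace them with your own from the Azure portal.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class FaceApiClient
{
    // Placeholder values: replace with your own region and key
    // obtained from the Azure portal.
    public const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/face/v1.0/detect";
    const string SubscriptionKey = "<your-subscription-key>";

    static readonly HttpClient Client = new HttpClient();

    // Sends a raw image to the Face API and returns the JSON response,
    // an array describing each detected face.
    public static async Task<string> DetectFacesAsync(byte[] imageBytes)
    {
        using (var request = new HttpRequestMessage(HttpMethod.Post, Endpoint))
        {
            // Cognitive Services authenticates each call with this header.
            request.Headers.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);
            request.Content = new ByteArrayContent(imageBytes);
            request.Content.Headers.ContentType =
                new MediaTypeHeaderValue("application/octet-stream");

            var response = await Client.SendAsync(request);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Notice that all of this is plain `HttpClient` code: there is nothing AI-specific on the client side beyond the endpoint, the key header and the payload format.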
If you already know that part, the rest is a piece of cake. We just need to know how to use certain APIs of each platform (Xamarin.Android / Xamarin.iOS) to access the components we need, e.g. accessing the camera to take pictures and then sending them when we work with face recognition or a similar service.
In order to use these platform APIs, it helps to understand how DependencyService works. With DependencyService we can call the specific or native functionality of each platform from our Xamarin.Forms app. Again, you can check the official documentation.
Example of a DependencyService implementation below:
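This is a minimal sketch (the interface and class names are illustrative, not from an official sample): a shared interface is resolved at runtime to a platform implementation registered with the `Dependency` attribute.

```csharp
// --- Shared project (Xamarin.Forms) ---
// The contract that every platform must implement.
public interface ITextToSpeech
{
    void Speak(string text);
}

// From any shared page you would resolve and call it like this:
//   DependencyService.Get<ITextToSpeech>().Speak("Hello!");

// --- Android project (its own file) ---
// At the very top of that file, register the implementation so
// Xamarin.Forms can find it:
//   [assembly: Xamarin.Forms.Dependency(typeof(TextToSpeechAndroid))]
public class TextToSpeechAndroid : ITextToSpeech
{
    public void Speak(string text)
    {
        // Here you would call the native Android.Speech.Tts.TextToSpeech engine.
    }
}
```

The same pattern applies to the camera example above: define a shared interface, implement it per platform, and let DependencyService pick the right implementation at runtime.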
It is also good to know that many of the most used APIs or services already come with Xamarin.Essentials, or it is very likely that someone else has already built a plugin that handles the functionality you want to use. Even so, do not forget to keep DependencyService in mind.
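For example, text-to-speech is already wrapped by Xamarin.Essentials, so for that particular feature no DependencyService is needed (the `SpeechDemo` class below is just an illustrative wrapper):

```csharp
using System.Threading.Tasks;
using Xamarin.Essentials;

public static class SpeechDemo
{
    // Xamarin.Essentials resolves the native text-to-speech
    // engine on each platform for us.
    public static Task SayHelloAsync() =>
        TextToSpeech.SpeakAsync("Hello from Xamarin.Forms!");
}
```

Before writing a DependencyService of your own, it is worth checking whether Xamarin.Essentials or a community plugin already covers the feature.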
Activate push notifications and subscribe to receive new content faster. I also leave you some resources that will be very helpful to start creating your smart applications.
- Read the official documentation on Cognitive Services and Xamarin.Forms
- See the great Microsoft AI School tutorial
- Get certified for free at MVA (Soon migrated to Microsoft Learn)
- Learn from the best at Xamarin University (Soon migrated to Microsoft Learn)
This is all for now, see you in the next one!
Thank you so much for reading this post!