In the era of the Internet of Things (IoT), conventional cloud-based solutions struggle to handle the sheer volume, velocity, and heterogeneity of the data generated at the network edge. In this context, the edge-to-cloud compute continuum has …
This post shows how to build an unsupervised deep learning model for digit generation: a convolutional variational autoencoder trained on the MNIST dataset of handwritten digits using Keras+Tensorflow.
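As a quick illustration of the idea, here is a minimal sketch of such a model in Keras/TensorFlow. The layer sizes, latent dimension, and loss weighting are illustrative assumptions, not the exact architecture used in the post.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2  # assumed latent-space size, kept small for illustration

class Sampling(layers.Layer):
    """Reparameterization trick: z = mean + sigma * epsilon; also adds the KL term to the loss."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
        self.add_loss(kl)
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: maps a 28x28 digit to the parameters of a latent Gaussian.
enc_in = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
z = Sampling()([z_mean, z_log_var])

# Decoder: maps a latent vector back to a 28x28 digit.
dec_in = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(dec_in)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
dec_out = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(dec_in, dec_out, name="decoder")

# End-to-end VAE: reconstruction loss (binary cross-entropy) plus the KL term added above.
vae = keras.Model(enc_in, decoder(z), name="vae")
vae.compile(optimizer="adam", loss="binary_crossentropy")

# Train on MNIST images as both input and target, then generate new digits
# by decoding points sampled from the latent space.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
vae.fit(x_train, x_train, epochs=1, batch_size=128)
generated = decoder.predict(np.random.normal(size=(10, latent_dim)))
```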
In this post I show how to leverage BERT, a transformer-based language representation model, to identify the personality type of users based on their writing style and the content of their posts, according to the Myers-Briggs Type Indicator (MBTI).
In what follows, I'll show how to fine-tune a BERT classifier, using Hugging Face and Keras+Tensorflow, for two different text classification problems. The first consists of detecting the sentiment (*negative* or *positive*) of a movie review, while the second classifies a comment according to different types of toxicity, such as *toxic*, *severe toxic*, *obscene*, *threat*, *insult* and *identity hate*. A minimal sketch of the fine-tuning setup for the first task follows.
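The sketch below fine-tunes a pretrained BERT model for binary sentiment classification with the Hugging Face transformers library and Keras. The toy texts and hyperparameters are illustrative assumptions, not the data or settings used in the post.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy examples standing in for movie reviews (0 = negative, 1 = positive).
texts = ["A wonderful, moving film.", "Two hours of my life I will never get back."]
labels = [1, 0]

encodings = tokenizer(texts, truncation=True, padding=True, max_length=128, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)

# The model returns raw logits, so the loss must be configured with from_logits=True.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)

# Inference on a new review.
inputs = tokenizer("Great acting and a clever script!", return_tensors="tf")
probs = tf.nn.softmax(model(inputs).logits, axis=-1)
```

For the toxic-comment task the same recipe applies, except that a comment can carry several labels at once, so one would set `num_labels=6` and switch to a sigmoid output with a binary cross-entropy loss.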
This post is dedicated to developing an artificial intelligence application capable of identifying the emotions conveyed by the voice in spoken language. The classification model covers seven different emotions (*anger*, *boredom*, *disgust*, *fear*, *happiness*, *sadness*, *neutral*) and is enhanced with an attention mechanism.
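To give a flavour of how attention can be wired into such a classifier, here is a minimal Keras sketch: a recurrent encoder over MFCC frames followed by an attention-pooling layer. The input shapes, layer sizes, and use of a BiLSTM are assumptions for illustration, not the exact model of the post.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_EMOTIONS = 7  # anger, boredom, disgust, fear, happiness, sadness, neutral

class AttentionPooling(layers.Layer):
    """Learns a relevance score per time step and returns the weighted sum of the frames."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.score = layers.Dense(1)  # scalar attention score per frame

    def call(self, inputs):
        scores = self.score(inputs)                      # (batch, time, 1)
        weights = tf.nn.softmax(scores, axis=1)          # attention weights over time
        return tf.reduce_sum(weights * inputs, axis=1)   # (batch, features)

# Inputs are sequences of MFCC frames; 300 frames x 40 coefficients is an assumed shape.
inputs = keras.Input(shape=(300, 40))
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
x = AttentionPooling()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(NUM_EMOTIONS, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

The attention layer lets the classifier weigh the frames of the utterance differently, instead of relying only on the last recurrent state.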
In what follows, I'll show how to build a dog breed classifier based on Convolutional Neural Networks, focused on two particular breeds: Chihuahua and Pug. In order to cope with the small amount of training data, the model exploits three main techniques: real-time data augmentation, transfer learning, and fine-tuning.
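A minimal sketch of how these three techniques fit together in Keras is shown below. The choice of MobileNetV2 as the pretrained backbone, the input size, and the number of unfrozen layers are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Real-time data augmentation: random transforms applied on the fly during training.
augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Transfer learning: reuse an ImageNet-pretrained backbone with its weights frozen.
base = keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                      include_top=False, weights="imagenet")
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = augmentation(inputs)
x = keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # Chihuahua vs Pug
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning: once the new head has converged, unfreeze the top of the backbone
# and continue training with a much smaller learning rate.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```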