Sunday, August 16, 2020

Pixel is a powerful, portable, personal pocketful of AI....

My son’s an AI lad. He has expertise in object recognition (currently best in world on fruit recognition, which can be used to increase yield). He’s also been involved in AI for Learning, as he’s coded the new version of WildFire, an AI-driven content creation service for learning. So he’s my go-to guy for recommendations, and he swears by his Pixel smartphone from Google. As he says, “it’s literally AI in your pocket”. For me, the Pixel is a little sandbox for consumer AI, and so gives us insights into the way technology is moving and therefore the way online learning will move.

Not many products get better after you buy them, but that can be said of this smartphone. As a device it really does deserve to be called ‘smart’, as it uses in-device machine learning. The Pixel 4 uses Neural Core, a TPU chip with tons of on-board AI features for everything from song recognition to computational photography. The Adaptive Battery feature even uses AI to predict, from your use patterns, when your battery will run out, and automatically reduces behind-the-scenes activity to lengthen battery life. The Pixel phones take AI to a new level with:


Transcription

Language and image recognition

Learner support

Capture media




Transcription

Speech to text has come of age and the Pixel automatically transcribes videos. Live Caption will also handle podcasts and audio messages. You can record and export these transcripts and, as the text is searchable, keyword triggers can also be set up. Note taking can be transformed if you use Google’s transcription service during Zoom calls.
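Because an exported transcript is just searchable text, a keyword trigger can be sketched in a few lines of Python. This is only an illustration of the idea; the transcript and keywords below are invented:

```python
# Scan a transcript for keyword triggers and report the line numbers
# on which each keyword fires.
def find_triggers(transcript: str, keywords: list[str]) -> dict[str, list[int]]:
    hits: dict[str, list[int]] = {kw: [] for kw in keywords}
    for line_no, line in enumerate(transcript.lower().splitlines(), start=1):
        for kw in keywords:
            if kw.lower() in line:
                hits[kw].append(line_no)
    return hits

transcript = ("Welcome to the webinar.\n"
              "Today we cover LXPs and learning analytics.\n"
              "LXPs are replacing the LMS.")
hits = find_triggers(transcript, ["LXPs", "analytics"])
```

A real system would run this over each new transcript and notify you when a tracked topic comes up.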

You can start, save and search recordings in the Recorder app using Google Assistant. Just say “Hey Google, start recording my voice” to start recording, or “Hey Google, find my voice recording about LXPs” to find that recorded session. Saved transcripts can also be easily exported to Google Docs: choose a recording, tap “Transcript” to show the transcript, tap the three-dot menu in the top-right corner, then tap “Save text to Google Docs.”

There are all sorts of NLP (Natural Language Processing) tricks you can pull off here. We already use this in WildFire to transcribe videos, going further by using AI to automatically generate powerful online learning. We have also been using voice as input. Imagine online learning that allows open voice responses from a learner, with automatic, semantic interpretation, so that feedback can be provided until you get things right. That is exactly what we have done in WildFire.
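The feedback loop behind open voice responses can be illustrated with a deliberately crude stand-in for semantic interpretation: score the learner’s answer against a set of key concepts and feed back what is missing. WildFire’s real pipeline uses proper NLP; this sketch, with invented content, only shows the loop:

```python
# Score a free-text answer against expected key concepts and return
# targeted feedback until the learner covers them all.
def assess(answer: str, key_concepts: set[str]) -> tuple[float, str]:
    words = set(answer.lower().split())
    found = key_concepts & words
    score = len(found) / len(key_concepts)
    if score == 1.0:
        return score, "Correct - you covered every key concept."
    missing = ", ".join(sorted(key_concepts - words))
    return score, f"Partly right. Try again, thinking about: {missing}"

score, feedback = assess("neurons pass signals across a synapse",
                         {"neurons", "synapse", "signals"})
```

In practice the matching would be semantic (synonyms, paraphrase), not literal word overlap, but the provide-feedback-until-correct loop is the same.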

Language and image recognition

Google Lens must be one of the best, but least used, AI features on smartphones. You simply point your camera at a plant, tree, flower, animal, work of art, landmark, restaurant, product and much more. The good news is that an ‘education’ feature for Lens is in the works. You will be able to point your camera at an assignment or homework question and get instant help. The word is that Google will focus initially on maths, and we’ve seen how Photomath uses AI to solve a mathematical problem and unpack the steps from question to answer. There is huge scope here for learner engagement, support and eventually online teaching with this line of development.
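That idea of unpacking the steps from question to answer can be shown with a tiny worked example for a linear equation a·x + b = c. This is not Photomath’s method, just an illustration of step-by-step solution output:

```python
# Solve a*x + b = c and return both the answer and the worked steps,
# in the spirit of step-by-step maths help.
def solve_linear(a: float, b: float, c: float) -> tuple[float, list[str]]:
    steps = [
        f"Start: {a}x + {b} = {c}",
        f"Subtract {b} from both sides: {a}x = {c - b}",
        f"Divide both sides by {a}: x = {(c - b) / a}",
    ]
    return (c - b) / a, steps

x, steps = solve_linear(3, 4, 19)  # 3x + 4 = 19
```

The pedagogic value is in the steps, not the answer: each line is a teachable move the learner can check.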

In languages, Lens already translates in real time, whether it be a foreign menu or words on a page. You needed to be online in the past, but it looks as though this will be possible offline.

In some subjects, such as biology, geography, architecture and art, imagery may be important. Image recognition leading to relevant educational links is already in there; with a purely educational mode, it could be made even more relevant to education.

But the real advantage comes with its text recognition, which springs off into interpretation, recommendations, transcription and translation. Most subjects would benefit from this kind of help. What most people will use it for in an education or training context is its ability to take text from the real world, whether from a document, manual, whiteboard, book or business card, and turn it into text on the phone that can be used in any way you want. It’s the links from the text that matter: links to a free educational service, a person who can help or a possible training course.

Learner support

The primary problem with most assistant interaction is that it is ‘single shot’. You ask for something and it responds – once. Google Assistant is, as expected, forging ahead with continued conversation, or multi-turn dialogue. Say “Okay, Google” and Google Assistant will respond, but it will also continue to listen for additional commands, continuing the dialogue until you say “Stop” or “Thank you” to end the conversation. This is a fiendishly difficult software problem to solve and needs AI to do it well.

This could be a big leap for learning, as you can take deeper dives into topics, linked to actions and sharing. What makes it easier is the transcription of your words on the screen, confirming that it has captured what you intended. All of this increases that sense of flow, of it being a conversation. This is the direction of travel for conversational interfaces and chatbots. True dialogue promises to provide more than just answers to questions; at some point it will also provide real Socratic dialogue, in other words – teaching.
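The shape of multi-turn dialogue, keep responding until the user utters an end phrase, can be sketched as a simple loop. The end phrases come from the post; the canned questions and answers are invented:

```python
# Minimal multi-turn dialogue loop: keep answering until the user says
# "stop" or "thank you" to end the conversation.
ANSWERS = {
    "what is spaced practice": "Reviewing material at increasing intervals.",
    "why does it work": "Each retrieval strengthens the memory trace.",
}
END_PHRASES = {"stop", "thank you"}

def converse(utterances: list[str]) -> list[str]:
    replies = []
    for utterance in utterances:
        text = utterance.lower().strip("?! .")
        if text in END_PHRASES:
            replies.append("Ending conversation.")
            break
        replies.append(ANSWERS.get(text, "Sorry, I don't know that one."))
    return replies

replies = converse(["What is spaced practice?", "Why does it work?", "Thank you"])
```

The hard AI problem is not the loop but carrying context between turns, so “why does it work” is understood as referring to the previous answer.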

Capture media

Phones have largely replaced cameras for most consumer use. Taking pictures and videos for Facebook, Twitter, Instagram and TikTok (see why TikTok is relevant to online learning) has become a core use of smartphones. Social media has migrated across media: from text, to text and images, to images only, to video, and now to media with the ability to create, filter and edit. In Pixel phones you see this happen at a very sophisticated level.

Want a sharp portrait, a good picture at night, images of the Milky Way or a good zoomed image? Google kicks ass on computational photography. Using machine learning-based white balance and multiple exposures to fix problems in an image, it turns you into an impressive photographer. That’s where machine learning, on-device neural engines, and overall improvements in both hardware and software performance raise the game in photography. The Pixel literally uses AI as a creative force in photography.
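One core trick behind multi-exposure night photography can be shown in miniature: averaging several noisy frames of the same scene suppresses random sensor noise. This toy uses flat lists of brightness values in place of real image arrays:

```python
# Toy multi-exposure merge: average corresponding pixels across several
# noisy frames of the same scene to reduce random noise.
def merge_exposures(frames: list[list[float]]) -> list[float]:
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Three noisy captures of the same three-pixel scene:
frames = [[100, 210, 55], [104, 206, 57], [96, 214, 53]]
merged = merge_exposures(frames)
```

Real pipelines also align the frames and weight them, but the averaging principle is the same.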

As video has become an important medium in learning, smartphones like these, combined with the sharing capability of online platforms, allow learning through video to happen with ease.


Basically, your smartphone is getting smarter, as it is now aware of where it is: not simply its GPS position, but where it is in relation to the world around it. Google’s Soli, a motion-sense chip, uses smart sensors and data analysis to detect how big something is, where it is and how close your phone is to it. It shoots out electromagnetic waves, and these waves bounce back to be interpreted by AI so that positions and objects can be recognised. It has a 180-degree field of view, better than the human eye, which covers only around 120 degrees, with most of that being peripheral vision and sharp sight concentrated in a much smaller arc.
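The principle behind that ranging, bounce a wave off an object and time its return, is just the classic radar relation d = c·t / 2 (halved because the wave travels out and back). A minimal sketch with illustrative numbers:

```python
# Radar ranging in principle: distance from the round-trip time of an
# electromagnetic pulse, d = c * t / 2.
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

# A reflection arriving about 2 nanoseconds after emission is roughly
# 30 cm away:
d = distance_m(2e-9)
```

Soli’s actual signal processing (frequency shifts, micro-motion signatures) is far richer, but this is the geometric core.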

This is crazy, but as you go to pick up your Pixel, it sees your hand, switches on the face recognition sensors, recognises you and unlocks your phone, all in one motion. It recognises lots of face orientations, even upside down, for unlocking secure payments. This is only one of a number of applications for motion sense. And if you’re worried, as some were, about a phone that can be unlocked while you are sleeping, or even dead, they have introduced a blink recognition system.

Motion sense also delivers gesture recognition. This touchless approach could be huge, especially in a more Covid-aware world. We have contactless payment, and contactless interfaces are now here. A swipe of the hand to move back and forth through songs, a pinch of the fingers for a button press; we could soon see an agreed language, like sign language, for interactions on lots of different devices: smartphones, laptops, with AR, within VR, controls within cars. We gesture all the time, almost unconsciously pointing to imaginary watches when describing time, and we’ve moved towards ever more transparent interfaces, with touchscreens, voice and now gestures.
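Once gestures are recognised, a gesture vocabulary is ultimately a mapping from gestures to actions, a dispatch table. The gesture names and actions below are invented to show the shape of the idea:

```python
# Dispatch recognised gestures to media-control actions on a small
# player state (track index and play/pause flag).
def make_controller():
    state = {"track": 0, "playing": True}
    actions = {
        "swipe_left":  lambda: state.update(track=state["track"] - 1),
        "swipe_right": lambda: state.update(track=state["track"] + 1),
        "pinch":       lambda: state.update(playing=not state["playing"]),
    }
    def handle(gesture: str) -> None:
        if gesture in actions:  # silently ignore unrecognised gestures
            actions[gesture]()
    return state, handle

state, handle = make_controller()
for g in ["swipe_right", "swipe_right", "pinch"]:
    handle(g)
```

An agreed cross-device gesture language would amount to standardising the keys of that table.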

AI for Learning
As I explain in my book ‘AI for Learning’, it is the invisible hand and eye of AI that fuelled this change. In learning, these frictionless interfaces are easier to learn and use. They also reduce cognitive load, leaving more bandwidth to learn.

Your phone may also know not just where you are, but what is around you, allowing the start of more sophisticated context reading for online job aids and learning. Suppose my phone knows what building I’m in, where I am in that building, that I am close to an object, and also knows what project I’m working on; it can then make an educated guess as to what I’m likely to need in terms of push and pull nudges and support. This could be performance support on steroids, where the whole move towards learning in the workflow is enriched by AI.
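That “educated guess” is, at its simplest, a rule that combines context signals into a suggestion. The locations, projects and nudges below are invented to illustrate the pattern:

```python
# Context-aware nudging sketch: combine location and current project
# to pick a push suggestion for learning in the workflow.
def nudge(location: str, project: str) -> str:
    rules = {
        ("lab", "onboarding"): "Open the lab safety checklist.",
        ("warehouse", "onboarding"): "Review the forklift micro-lesson.",
        ("lab", "audit"): "Here is the calibration log how-to.",
    }
    return rules.get((location, project), "No suggestion right now.")

suggestion = nudge("lab", "onboarding")
```

A real system would learn these rules from behaviour rather than hand-code them, and would rank many candidate nudges rather than return one.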


The smartphone has been astoundingly successful as a consumer and professional device. From its brick-like dimensions in the 1970s and 80s, it quickly developed from voice-only into text, photographs, then video. Its interfaces moved from buttons to touchscreens, and it is now a powerful computer that can do much of what a desktop computer can do and more. But the real leap is in AI capabilities, as these phones have embedded AI hardware for a lot more offline punch, as well as useful functionality. Your phone learns about you, personalises your experiences, knows where you are and now what’s around you. This all helps deliver the support you need to work, learn and improve your own expertise. We would be wise to look at the evolution of these devices as the evolution of how learners have interfaced, and will interface, with online learning. The main lesson is that the AI in every modern smartphone will be in all online learning in the future.

