You Smile You Lose using JavaScript AI
Today, we’re going to laugh. These are difficult times, and the consequences of this virus are just a detail in the ocean of shit I’m drowning in. Instead of crying, I came up with a small project to take my mind off my problems.
TLDR
I made a web app that monitors your smile through your webcam using artificial intelligence. I show you funny videos; if you smile, you lose! It’s very funny, it feels good, it’s open source and it only uses web technologies!
Take 5 minutes of your time to have a laugh.

If you’ve tried it, you must have laughed at at least one or two videos. You must have.
Otherwise either you’re too strong or you have no soul.
You want to add a funny video? Did you see a bug? Is a feature missing? The project is open source and I invite you to participate. I approve merge requests very easily!
If you want to know why and how I built this app, you’ll find exactly that in the rest of the article!
The idea
As I was telling you, these are pretty rotten times. So, like anyone who’s a little depressed, I was idly wandering around YouTube, looking for funny content to take my mind off things.
And that’s when I came across (once again) those famous You Laugh You Lose videos. The principle is simple: you put people in front of funny videos; if they laugh, they lose.
And then I said to myself: “why not do the same thing but for the general public and in the browser?”.
I have everything I need. The videos would come from YouTube so no need to host them, manage streaming or manage a player. It would be a static site to simplify the hosting of the app. And most importantly, I already know how to detect a smile on a face.
I gave myself 2 days to code everything, host the project, write the article you’re reading in two languages and publish the code as open source on my GitHub. OK GO.
Smile detection
So, believe it or not, that was by far the easiest and fastest part. For several reasons.
- First reason: nowadays, expression detection via artificial intelligence models on the web is very easy. Anyone can do it and/or set it up.
- Second reason: I already did it in a previous project!
Remember? When I did my previous bullshit with GIFs.
So, if you want to know how this part works in particular, I invite you to read the dedicated article.
In a few words, I use the face-api library, which manages the whole complex part for me. When the app launches, I load the models and set up the webcam. After that, I just need the high-level face-api API: every 400 ms, I check whether the user is smiling or not.
/**
 * Load models from faceapi
 * @async
 */
async function loadModels() {
  await faceapi.nets.tinyFaceDetector.loadFromUri("https://www.smile-lose.com/models")
  await faceapi.nets.faceExpressionNet.loadFromUri("https://www.smile-lose.com/models")
}
/**
 * Setup the webcam stream for the user.
 * On success, the stream of the webcam is set as the source of the HTML5 video tag.
 * On error, the status shows "camera not found" and the game continues without the camera.
 */
function setupWebcam() {
  navigator.mediaDevices
    .getUserMedia({ video: true, audio: false })
    .then(stream => {
      webcam.srcObject = stream
      if (isFirstRound) startFirstRound()
    })
    .catch(() => {
      document.getElementById("smileStatus").textContent = "camera not found"
      isUsingCamera = false
      if (isFirstRound) startFirstRound()
    })
}
/**
 * Determine if the user is smiling or not by getting the most likely current expression
 * from the faceapi detection object. Build an array to iterate over each possibility and
 * pick the most likely one.
 * @param {Object} expressions object of expressions
 * @return {Boolean}
 */
function isSmiling(expressions) {
  // filtering false positives
  const maxValue = Math.max(
    ...Object.values(expressions).filter(value => value <= 1)
  )
  const expressionsKeys = Object.keys(expressions)
  const mostLikely = expressionsKeys.filter(
    expression => expressions[expression] === maxValue
  )
  return mostLikely[0] === 'happy'
}
/**
 * Set a refresh interval where faceapi will scan the face of the subject
 * and return an object of the most likely expressions.
 * Use this detection data to determine whether the user is smiling and update the status.
 * @async
 */
async function refreshState() {
  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(webcam, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions()
    if (detections && detections[0] && detections[0].expressions) {
      isUsingCamera = true
      if (isSmiling(detections[0].expressions)) {
        currentSmileStatus = true
        document.getElementById("smileStatus").textContent = "YOU SMILE !"
      } else {
        document.getElementById("smileStatus").textContent = "not smiling"
      }
    }
  }, 400)
}
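For the record, here is roughly how these pieces fit together when the page loads. This is a minimal sketch with a hypothetical init function; the actual bootstrap in the repo may differ:
/**
 * Bootstrap sketch (hypothetical): load the models first,
 * then start the webcam and the detection loop.
 */
async function init() {
  await loadModels() // fetch the face-api models before anything else
  setupWebcam()      // ask for the camera and plug the stream into the video tag
  refreshState()     // start polling expressions every 400 ms
}
init()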
You’ll find all the source code of the project on GitHub!
Video management
As I said before, there’s no way I’m hosting or streaming the videos myself. I want the cost of hosting and running this project to be around zero. The fact that it’s a static site helps a lot here. Thanks S3 + Cloudflare 🙂
So I figured I’d use the YouTube player, YouTube videos and the YouTube API. Thanks YouTube. The problem is that I want to stay on my own site, so I have to use the embedded version of the YouTube player.
No worries, YouTube offers a dedicated API for the embedded player!

I’ve never used the YouTube API before and I must say it was very easy to understand and use.
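One detail worth knowing: the embedded player API is loaded from a script tag and calls a global onYouTubeIframeAPIReady function once it’s ready. Here is a minimal bootstrap sketch (the repo may wire this up slightly differently):
// Load the IFrame Player API script asynchronously
const tag = document.createElement('script')
tag.src = 'https://www.youtube.com/iframe_api'
document.head.appendChild(tag)

// The API calls this global function once it is downloaded and ready
function onYouTubeIframeAPIReady() {
  setupYoutubePlayer()
}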
/**
 * Setup the youtube player using the official API
 */
function setupYoutubePlayer() {
  player = new YT.Player('player', {
    height: '100%',
    width: '100%',
    videoId: 'ewjkzE6X3BM',
    playerVars: {
      'controls': 0,
      'rel': 0,
      'showinfo': 0,
      'modestbranding': 1,
      'iv_load_policy': 3,
      'disablekb': 1
    },
    events: { 'onStateChange': onPlayerStateChange }
  })
}
/**
 * We want to show the intermission when a video is over.
 * Listening to the onStateChange event of the youtube api.
 */
function onPlayerStateChange(event) {
  // 0 (YT.PlayerState.ENDED) means the video is over
  if (event.data === 0) {
    player.stopVideo()
    showIntermission()
  }
}
/**
 * Entrypoint. This should only be used once.
 */
function startFirstRound() {
  isFirstRound = false
  currentSmileStatus = false
  document.getElementById("loading").style.display = 'none'
  document.getElementById('intermission').className = 'fadeOut'
  player.playVideo()
}
/**
 * Showing the next video to the user.
 * This should only be triggered by the click on the next video button.
 */
function showNextVideo(event) {
  event.preventDefault()
  document.getElementById('loading').style.display = 'block'
  document.getElementById('result').style.display = 'none'
  if (listOfVideoIds.length) {
    const nextVideoId = extractRandomAvailableVideoId()
    player.loadVideoById({ videoId: nextVideoId })
    player.playVideo()
    setTimeout(() => {
      currentSmileStatus = false
      document.getElementById('intermission').className = 'fadeOut'
    }, 1000)
  } else {
    showCredit()
  }
}
Finally, I manage the videos in a simple array of strings (YouTube video IDs) declared at the very beginning of the application. Each time the user clicks to see another video, I randomly pick one. The ID is then removed from the array and loaded into the embedded YouTube player. Easy!
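The picking function isn’t shown above; an implementation of extractRandomAvailableVideoId could be as simple as this sketch (not necessarily the exact code in the repo):
/**
 * Sketch: pick a random video id from the list and remove it,
 * so the same video is never shown twice.
 * @return {String} a YouTube video id
 */
function extractRandomAvailableVideoId() {
  const randomIndex = Math.floor(Math.random() * listOfVideoIds.length)
  // splice removes the picked id from the array and returns it
  return listOfVideoIds.splice(randomIndex, 1)[0]
}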
TODO
I did it very quickly.
As a result, a lot of things are missing in this app.
Do you want to help?
A lot of stuff needs to be added here:
- score management
- management of other embedded players (Dailymotion, Vimeo, Twitch)
- a skip button to cheat and go to the next video
- a less strict management of smile detection (several smiles before counting a real smile)
- detect that the user is no longer in the field of view of the camera (very easy to do; see the sketch after this list)
- hide the display of YouTube cards at the end of some videos
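For that last-but-one item, the detection loop already gives us everything we need: if face-api returns no detection for a while, the user has probably left the frame. A possible sketch, purely hypothetical and not in the repo yet:
/**
 * Sketch (hypothetical): count consecutive scans without a face.
 * After roughly 5 seconds with no detection, consider the user out of frame.
 */
let missedDetections = 0

function checkUserPresence(detections) {
  if (detections && detections.length > 0) {
    missedDetections = 0
    return
  }
  missedDetections++
  // the detection loop runs every 400 ms, so 12 misses is about 5 seconds
  if (missedDetections > 12) {
    document.getElementById("smileStatus").textContent = "where are you?"
  }
}
It would just need to be called from refreshState with the detections array.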
If you’re interested in something in this list and you’re not afraid of JavaScript: you’ll find the GitHub repo here! Again, I approve MRs easily, so don’t hesitate.
Epilogue
End of the challenge. I had a good laugh and it felt good. I hope it will be the same for you. It’s the most I can do to help you get through this endless day. In the meantime, I’ll see you next Monday!

