I'm not a big fan of embedding YouTube videos; it adds a lot of weight to the page, and I have limited control over it. It pulls in all kinds of styles, scripts and fonts from multiple domains. YouTube has a nocookie embed option, which is supposed to be privacy-friendly, but who knows.
So if not YouTube, what then? Vimeo is one option; it costs a few coffees per month. But it shares some of the same problems as YouTube, as it also uses an iframe.
So I tried some other solutions.
Video.js
For all my solutions I used Video.js. To conditionally load it I added the following to the head partial:
{{- if .HasShortcode "video" -}}
<link rel="stylesheet" href="{{ (resources.Get "assets/video-js.min.css" | fingerprint).Permalink }}" />
<link rel="preconnect" href="{{ $.Site.Params.Video.fileBaseUrl }}" crossorigin>
<link rel="dns-prefetch" href="{{ $.Site.Params.Video.fileBaseUrl }}">
{{ end -}}
The preconnect and dns-prefetch hints point at the video host, which is set in the site parameter Params.Video.fileBaseUrl.
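This parameter, and a corresponding Params.Video.thumbBaseUrl used by the shortcodes below, live in the site configuration. A minimal sketch, assuming a YAML config and made-up URLs (the trailing slashes matter, since the shortcodes concatenate the base URL and video path directly):
params:
  video:
    fileBaseUrl: https://video.example.com/
    thumbBaseUrl: https://thumb.example.com/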
And to my footer partial:
{{- if .HasShortcode "video" -}}
<script src="{{ (resources.Get "assets/video.min.js" | fingerprint).Permalink }}"></script>
{{ end -}}
FFmpeg and S3
First I tried encoding the HLS streams with FFmpeg and uploading them to S3. There are multiple ways of serving content from S3; I used a Bunny.net pull zone.
I found a nice guide and script on Peer5. That would have taken me a long time to figure out myself, so thanks Peer5 🙂
The script didn't create thumbnails, so I added that:
thumbnail="1280x720"
# make thumb(s): extract one frame every 10 seconds, scaled to fit within 1280x720
thumb_w="$(echo ${thumbnail} | cut -d 'x' -f 1)"
thumb_h="$(echo ${thumbnail} | cut -d 'x' -f 2)"
ffmpeg -i ${source} -vf "fps=1/10,scale=w=${thumb_w}:h=${thumb_h}:force_original_aspect_ratio=decrease" ${target}/thumbnail_%03d.jpg -y
Next I made a shortcode to insert the video object:
{{- $src := .Get "src" -}}
{{- $thumb := default 1 (.Get "thumb") -}}
{{- $title := default $src (.Get "title") -}}
{{- $description := (.Get "description") -}}
{{- $video := index $.Site.Data.videos $src -}}
{{- $poster := printf "%s%s/thumbnail_%03d.jpg" $.Site.Params.Video.thumbBaseUrl $src $thumb -}}
{{- $playlist := printf "%s%s/playlist.m3u8" $.Site.Params.Video.fileBaseUrl $src -}}
{{- with $video -}}
<video class="video-js vjs-16-9" controls preload="none" width="{{.width}}" height="{{.height}}" poster="{{$poster}}" data-setup="{}">
<source type="application/x-mpegURL" src="{{ $playlist }}">
</video>
{{- with $description -}}
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "VideoObject",
"name": {{ $title }},
"description": {{- $description -}},
"thumbnailUrl": [ {{ $poster }} ],
"uploadDate": {{ $video.modified }},
"duration": {{ $video.duration }},
"contentUrl": {{ $playlist }}
}
</script>
{{- end -}}
{{- else -}}
{{ errorf "Missing data value for video: %s" $src }}
{{- end -}}
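Used in a post, a call to the shortcode could look something like this, with src matching a key in the data file below (thumb, title and description are optional):
{{< video src="rpi-traffic-lights/02_battery-indicator-test.mp4" >}}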
It also pulls some video information from data/videos.yml:
rpi-traffic-lights/02_battery-indicator-test.mp4:
  width: 1920
  height: 1080
  bitrate: 15.05
  framerate: 30.00
  duration: PT00M42S
  aspect_ratio: 16:9
  modified: 2021-03-10
To build this data file I made a bash script that iterates over all master video files and fetches their metadata using MediaInfo:
#!/bin/bash
masters=`find _master -type f`

for master in $masters; do
  echo $(echo $master | sed 's/_master\///'):
  videoInfo=`mediainfo --Inform="Video;%Width%,%Height%,%BitRate%,%FrameRate%,%Duration%,%DisplayAspectRatio/String%" $master`

  width=`echo $videoInfo | cut -d , -f 1`
  echo "  width: $width"

  height=`echo $videoInfo | cut -d , -f 2`
  echo "  height: $height"

  # MediaInfo reports the bitrate in bit/s, convert to Mbit/s
  bitrate=`echo $videoInfo | cut -d , -f 3`
  bitrate=`echo $bitrate / 1000 / 1000 | bc -l`
  bitrate=`printf %.2f $bitrate`
  echo "  bitrate: $bitrate"

  framerate=`echo $videoInfo | cut -d , -f 4`
  framerate=`printf %.2f $framerate`
  echo "  framerate: $framerate"

  # MediaInfo reports the duration in milliseconds, convert to ISO 8601
  duration=`echo $videoInfo | cut -d , -f 5`
  duration=`echo $duration/1000 | bc`
  duration=`date -d@$duration -u +PT%MM%SS`
  echo "  duration: $duration"

  aspect_ratio=`echo $videoInfo | cut -d , -f 6`
  echo "  aspect_ratio: $aspect_ratio"

  modified=`stat -c %y $master | cut -d " " -f 1`
  echo "  modified: $modified"
done
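I run the script from the folder holding the _master directory and redirect its output straight into Hugo's data directory; the script name here is just an assumption:
$ ./video-info.sh > data/videos.yml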
It worked, but I wasn't confident about the encoding script… and I didn't want to spend too much time figuring out best practices for encoding HLS for the web.
Bunny Stream
Next I tried Bunny Stream.
Bunny Stream solves the hassle of video delivery by packing transcoding, storage, security & a video player into a simple, but powerful package.
It’s a nice solution, where you just upload the video and they take care of the rest.
So I had to rewrite the shortcode:
{{- $id := .Get "id" -}}
{{- $title := .Get "title" -}}
{{- $description := .Get "description" -}}
{{- $video := getJSON "http://bunny-api-gateway:8080" (printf "/videos/%s" $id) -}}
{{- $poster := printf "%s%s/%s" $.Site.Params.Video.thumbBaseUrl $id $video.thumbnailFileName -}}
{{- $playlist := printf "%s%s/playlist.m3u8" $.Site.Params.Video.fileBaseUrl $id -}}
{{- $fallback := printf "%s%s/play_720p.mp4" $.Site.Params.Video.fileBaseUrl $id -}}
{{- with $video -}}
<video class="video-js vjs-16-9" controls preload="none" width="{{.width}}" height="{{.height}}" poster="{{$poster}}" data-setup="{}">
<source src="{{ $playlist }}" type="application/x-mpegURL">
<source src="{{ $fallback }}" type="video/mp4">
</video>
{{- with $description -}}
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "VideoObject",
"name": {{ $title | default $video.title }},
"description": {{ $description }},
"thumbnailUrl": [ {{ $poster }} ],
"uploadDate": {{ $video.dateUploaded }},
"duration": {{ $video.custom.duration }},
"contentUrl": {{ $playlist }}
}
</script>
{{- end -}}
{{- else -}}
{{ errorf "Missing data for video: %s" $id }}
{{- end -}}
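The call in a post now takes the Bunny Stream video GUID instead of a file path, something like:
{{< video id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" >}}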
Instead of the data file videos.yml, I got the video information from the Bunny Stream API. But the API requires authentication, and Hugo's getJSON doesn't support that. So I made a simple "API gateway" in Python:
from flask import Flask, jsonify
import requests
import time

app = Flask(__name__)

class InvalidUsage(Exception):
    status_code = 400

    def __init__(self, message, status_code=None, payload=None):
        Exception.__init__(self)
        self.message = message
        if status_code is not None:
            self.status_code = status_code
        self.payload = payload

    def to_dict(self):
        rv = dict(self.payload or ())
        rv['message'] = self.message
        return rv

@app.errorhandler(InvalidUsage)
def handle_invalid_usage(error):
    response = jsonify(error.to_dict())
    response.status_code = error.status_code
    return response

@app.errorhandler(404)
def resource_not_found(e):
    return jsonify(error=str(e)), 404

def get_video(id):
    url = 'http://video.bunnycdn.com/library/xxxx/videos/' + id
    headers = {'AccessKey': 'xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx-xxxx-xxxx'}
    r = requests.get(url, headers=headers)
    json = r.json()
    json['custom'] = {
        'duration': time.strftime('PT%HH%MM%SS', time.gmtime(json['length']))
    }
    return json

@app.route('/videos/<videoid>')
def return_video_data(videoid):
    try:
        return get_video(videoid)
    except:
        raise InvalidUsage('Something bad happened', status_code=400)
It simply forwards queries for /videos/ to the Bunny Stream API, with the authentication added. I also added a custom duration field, where the length in seconds from the API is converted to an ISO 8601 duration.
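As a sketch of how it fits into a build, assuming the app is saved as app.py; the hostname bunny-api-gateway in the getJSON call above would then need to resolve to wherever this runs:
# in one terminal: serve the gateway on the port Hugo expects
$ FLASK_APP=app.py flask run --host=0.0.0.0 --port=8080
# in another: check that a video can be fetched through it
$ curl http://localhost:8080/videos/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx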
This also worked well, but I wanted a more hands-on solution, so I went searching again 🙂
Coconut.co and S3
I really liked my first solution, where I had access to, and control over, the video files. That makes for a very portable setup, where it's easy to switch hosting services, or even re-encode the videos if I ever want to.
But I didn't like having to encode the videos myself… I have no intention of learning the best practices for encoding video for web delivery.
So instead I opted to use an encoding service: Coconut.co
The simplest video encoding service and API since 2006. With the power of aws.
Nice! 😎
I like their API and easy-to-use libraries. So I made a simple Python script to create encoding jobs:
import coconut
import sys
import uuid
import hashlib
import json
import datetime
import urllib.parse

uuid = str(uuid.uuid4())

coconut.api_key = 'k-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
coconut.notification = {
    'type': 'http',
    'url': 'https://app.coconut.co/tools/webhooks/xxxxxxxx/xxxxxxxxx'
}
coconut.storage = {
    'service': 's3',
    'bucket': 'xxxxxxxxxxxxxx',
    'region': 'xxxxxxxxxxxx',
    'credentials': {
        'access_key_id': 'xxxxxxxxxxxxxxxxxxxx',
        'secret_access_key': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    },
    'path': '/video/{}'.format(uuid),
    'acl': 'private',
    'cache_control': urllib.parse.quote('public, s-maxage=31536000, max-age=7776000')
}

master_video = sys.argv[1]

sha256_hash = hashlib.sha256()
with open(master_video, "rb") as f:
    # Read and update hash string value in blocks of 4K
    for byte_block in iter(lambda: f.read(4096), b""):
        sha256_hash.update(byte_block)
file_hash = sha256_hash.hexdigest()

videos = []
with open('videos.json') as json_file:
    videos = json.load(json_file)

for v in videos:
    if v['sha256'] == file_hash:
        print('Found: ' + v['path'])
        print('Exiting...')
        sys.exit()

videos.append({
    'path': master_video,
    'sha256': file_hash,
    'uuid': uuid,
    'datetime': datetime.datetime.now().isoformat()
})
with open('videos.json', 'w') as outfile:
    json.dump(videos, outfile, indent=4)

job = coconut.Job.create(
    {
        "input": {
            "service": "s3",
            "credentials": {
                "access_key_id": "xxxxxxxxxxxxxxxxxxxx",
                "secret_access_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
            },
            "bucket": "cavelab-static",
            "key": master_video.replace('video/', '/video_master/'),
            "region": "eu-central-1"
        },
        'outputs': {
            'jpg:720p': {
                'path': '/thumbnail_%.2d.jpg',
                "number": 10
            },
            'mp4:720p': {
                'path': '/play_720p.mp4'
            },
            'httpstream': {
                'hls': {
                    'path': '/hls/'
                }
            }
        }
    }
)

print(uuid)
print(job)
The file to be encoded is provided as an argument:
$ python3 create.py video/homelab/file-server/blinkenlights.mp4
It assumes that the video file is available under the /video_master folder on the input S3 bucket.
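The master has to be uploaded there first; with the AWS CLI that could look something like this, using the bucket and key mapping from the job input above:
$ aws s3 cp video/homelab/file-server/blinkenlights.mp4 s3://cavelab-static/video_master/homelab/file-server/blinkenlights.mp4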
Here is what the script does:
- Computes the SHA256 checksum of the master video file
- Checks if that checksum already exists in videos.json; if it does, the video has already been encoded and the script exits
- Adds the video to videos.json:
  - File path
  - SHA256 checksum
  - Video ID, a generated UUID
  - Date and time
- Kicks off an encoding job with Coconut, which uploads to an S3 bucket:
  - 10 thumbnails in 720p
  - MP4 fallback file in 720p
  - HLS video stream
Sample entry in videos.json:
{
    "path": "video/homelab/file-server/blinkenlights.mp4",
    "sha256": "06864337eda17fc97bad715a636cba490687d45c14addf2f4102ae3e2cc5bb69",
    "uuid": "e5cccbee-4f39-4504-ae39-b6c134a24bac",
    "datetime": "2021-05-03T20:24:36.824282"
}
And the shortcode:
{{- $id := .Get "id" -}}
{{- $thumb := default 5 (.Get "thumb") -}}
{{- $poster := printf "%s%s/thumbnail_%02d.jpg" $.Site.Params.Video.thumbBaseUrl $id $thumb -}}
{{- $playlist := printf "%s%s/hls/master.m3u8" $.Site.Params.Video.fileBaseUrl $id -}}
{{- $fallback := printf "%s%s/play_720p.mp4" $.Site.Params.Video.fileBaseUrl $id -}}
<video class="video-js vjs-16-9" controls preload="none" width="1920" height="1080" poster="{{$poster}}" data-setup="{}" crossorigin="anonymous">
<source src="{{ $playlist }}" type="application/x-mpegURL">
<source src="{{ $fallback }}" type="video/mp4">
</video>
I didn't bother with the VideoObject structured data this time; I might add it in the future.
Currently I am serving the videos using a Bunny.net pull zone. But having the files in S3 means I am free to serve them with whatever service I choose 🙂
Using the shortcode:
{{< video id="e5cccbee-4f39-4504-ae39-b6c134a24bac" >}}
Now I just need to migrate all my YouTube embeds. I’ll get to that — some day 😉