TL;DR
patchbay.pub is a free web service you can use to implement things like static site hosting, file sharing, cross-platform notifications, webhook handling, smart home event routing, IoT reporting, job queues, chat systems, bots, and more, all completely serverless and requiring no account creation or authentication. Most implementations need nothing but curl and simple bash snippets.
Why did you make this?
I originally wanted an easy way to get a notification on my laptop when a long-running job on my server completed. After a bit of experimenting I decided a small amount of additional features would result in a more generally useful tool. This evolved into the following question:
"What is the 20% of IFTTT functionality I could implement to have 80% of IFTTT features that I would personally use?"
patchbay is what I ended up with.
The entire philosophy of patchbay is that all the logic runs on your local machine(s), typically in small shell snippets. The server exists only to connect ("patch") different components together.
How is it implemented?
patchbay provides an infinite number of virtual HTTP "channels" anyone can use. Each channel is represented by a URL. Here's an example channel:
https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344
Generally, a channel is chosen by randomly generating a string long enough that no one else will guess it (all channels are publicly accessible; see Security), and then sending HTTP requests to it. The channel above was generated by your browser when you loaded this page, and should be fine to use for running the examples. Channels don't have to be explicitly created before use.

Channels can operate in one or both of two modes. By default, a channel models a multi-producer, multi-consumer (MPMC) queue, where GET requests add a consumer and POSTs add a producer. Consumers will block if there aren't any producers, and producers will block if there aren't any consumers. As soon as a producer gets matched with a consumer, anything in the producer's HTTP body is streamed over the channel to the consumer.
Enough theory; let's try out a trivial example. If you run this GET to create a consumer, it will block:
curl https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344
Until you also run this POST in another terminal to create a producer:
curl https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344 -d "Hi there"
You can also try reversing the order, and observe that the producer blocks until you run the consumer. If you start 2 producers at the same time, you'll have to run the consumer twice in order to unblock both of them, one after the other.
If you use the /pubsub/ protocol, like this:
curl https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344
curl https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344 -d "Hi there"
the request uses the channel in a different mode. GETs act much as before, but POSTs become non-blocking and broadcast messages to all blocked pubsub consumers, not just the first one. As the name suggests, this models the PubSub pattern.
So, with that brief introduction, here are a few examples of things you can implement with MPMC queues and pubsub messages over HTTP:
Poor man's desktop notifications
Here's how my original goal can be implemented using patchbay.pub. First, on my remote server:
./longjob.sh; curl -X POST https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344
And on my Linux laptop with desktop notifications support:
curl https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344; notify-send "Job done"
That's it. I'll get a popup on my screen when the job is done. (Note that macOS also has notification functionality built in.) If I want to get real fancy, I can re-run the consumer in a loop. I keep the following script running in the background, ready to receive notifications from whatever producers I want, displaying the HTTP body from each producer:
#!/bin/bash
while true
do
    # Run curl and hand the result to notify-send. Quoting the result
    # keeps messages containing spaces intact, and checking curl's exit
    # status directly avoids firing an empty notification on failure.
    if body=$(curl -fs https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344); then
        notify-send "$body"
    else
        # Avoid too many requests if server is down or rejecting
        sleep 1
    fi
done
It's also possible to use a GET request to create a producer. This is done by using the method and body query parameters, like this:
curl "https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344?method=post&body=Job%20Done"
This is useful if you want to do something like send a signal from a mobile browser, where you only have access to the address bar. I like to create mobile browser shortcuts for triggering different things.
Poor man's SMS notifications
Let's extend the previous example a bit. What if I want to go to lunch, but still get notified when the job on my server is done? The server is already broadcasting a pubsub message, so I don't need to make any changes there. I just need to add a consumer that can notify me on the go. How about using Twilio to send myself a text message? First I followed the instructions to get the Twilio CLI installed and logged in on my laptop, then it's just a matter of calling it with the body received by the pubsub consumer:
twilio api:core:messages:create --from "+15017122661" --to "+15558675310" --body "$(curl -s https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344)"
Now I'll get a desktop notification, and a text message when the job is done. If you don't want to pay for texts, you can do something similar with any messaging app that offers an API.
Poor man's webhooks
Receiving a text notification is useful, but what if I want to send a text to my Twilio number and have it trigger some other event? This is easily done by logging in to the Twilio website and pointing the SMS webhook to https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344.
Any texts to my number will now trigger a pubsub event on the same channel as before. I can use whatever command line tools I want to process the webhook.
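For instance, Twilio delivers SMS webhooks as form-encoded POST bodies, so a consumer can pull out the Body field (the SMS text) with standard tools. A sketch (the -m timeout just keeps the demo from waiting forever, and the parsing is naive, leaving percent-encoding in place):

```shell
# Wait for one webhook delivery (capped at 10s for this demo).
payload=$(curl -s -m 10 https://patchbay.pub/pubsub/aa7cc811-d21c-42ef-92cc-a5566fa38344 || true)

# Twilio posts application/x-www-form-urlencoded fields such as
# From=...&Body=...; split on '&' and keep the Body= field.
text=$(printf '%s' "$payload" | tr '&' '\n' | grep '^Body=' | cut -d= -f2-)
echo "SMS received: $text"
```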
Poor man's IRC
How about an ad-hoc chat app? This chat includes everyone currently visiting this page, using the https://patchbay.pub/pubchat channel. It doesn't require any fancy WebRTC or WebSockets; just HTTP. It's implemented using the server-sent events (SSE) protocol. Instead of the events being generated on the server, they originate from peer producers and are broadcast to all consumers. If there's no one else around, try opening it in another tab and talking to yourself.
Open chat in tab
The code is quite simple, and most of it is just UI stuff. Here's the meat of it:
// Send a message:
const message = JSON.stringify({
  author: "Default Nickname",
  text: "Hi there",
});
fetch('https://patchbay.pub/pubsub/pubchat', {
  method: 'POST',
  body: `data: ${message}\n\n`,
});

// Receive messages:
const evtSource = new EventSource("https://patchbay.pub/pubsub/pubchat?mime=text%2Fevent-stream&persist=true");
evtSource.onmessage = function(event) {
  console.log(event.data);
};
There are a couple of new pieces of the patchbay API shown here. First, notice that I'm overriding the Content-Type returned from the server by setting mime=text/event-stream in the GET request. This is necessary to make the browser use the SSE protocol; the server will return whatever type you specify. I'm also setting persist=true, which keeps the consumer connection open rather than requiring it to reconnect after each message. This helps ensure no messages are missed.
You can also participate with curl using something like the following:
printf 'data: {"author": "Curly Jefferson", "text": "Hi there"}\n\n' | curl https://patchbay.pub/pubsub/pubchat --data-binary @-
Note that printf is necessary in order to properly pass the newlines to curl, and I'm using --data-binary to prevent curl from stripping whitespace. Also note that this chat system is very easy for bad actors to interfere with by sending crafted messages. A real solution would probably need something more sophisticated than SSE, including some sort of client-side filtering.
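You can also follow the stream with curl instead of a browser. A sketch (the -m timeout just bounds the demo; drop it to listen indefinitely):

```shell
# -N disables output buffering so events print as they arrive; the
# mime and persist parameters are the same ones EventSource uses above.
curl -N -s -m 10 "https://patchbay.pub/pubsub/pubchat?mime=text%2Fevent-stream&persist=true" || true
```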
Update 2022-02-21: omar2205 implemented a slick little private chat codepen you can find here.
Poor man's job queue
What if you had a directory with 1000 MP3s you want to transcode without using more than 4 cores on your machine? You could google how to use GNU parallel for the 27th time (assuming it runs on your operating system), or you could run something like this in one terminal:
#!/bin/bash
# IFS determines what to split on. By default it will split on spaces. Change
# it to newlines
# See https://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
ifsbak=$IFS
IFS=$(echo -en "\n\b")
for filename in *.mp3
do
curl https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344 -d "$filename"
done
# Need to restore IFS to its previous value
IFS=$ifsbak
And this in 4 others:
#!/bin/bash
while true
do
filename=$(curl -s https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344)
if [ "$filename" != "Too Many Requests" ]
then
echo "$filename"
ffmpeg -i "$filename" "$filename.ogg"
else
sleep 1
fi
done
The work will be evenly distributed across the consumer workers, without using more than 4 cores at a time.
Poor man's web hosting
Time to get funky. This entire web site is being hosted over a patchbay.pub channel. How? This is essentially how I'm hosting the index.html for the page you're reading (note that it uses the root channel, /):
while true; do curl -X POST https://patchbay.pub/ --data-binary @./index.html; done
And chat.js:
while true; do curl -X POST https://patchbay.pub/apps/simple_chat/chat.js --data-binary @./apps/simple_chat/chat.js; done
That's right; I'm advocating bash as your next web server! (Not something I ever thought I'd write.) This works because static site hosting can be modeled as an MPMC queue, where each file you want to host is a producer in a loop, and each browser request for a resource creates a consumer. You can also assign multiple producers to each hosted file for poor man's load balancing.
There's nothing special about channel ids. Any valid URL path will do. In this case the channel ids carry extra semantic information corresponding to an HTTP resource location, but as far as the server is concerned, apps/simple_chat/chat.js is just a bunch of characters forming an id.
Astute observers will see a problem here: what's preventing any of you from sending POST requests to those channels and competing with my producers to publish your own content in place of this page? For these few endpoints, I added a bit of authentication to ensure I'm the only one who can publish to them. In general I don't expect people to need publicly facing sites on patchbay.pub. I think sites hosted on privately shared channels (a la poor man's ngrok) are much more likely. But feel free to reach out if you want a "vanity channel" for some reason.
Just because you can't host something on the root channel doesn't mean you can't host it somewhere else on patchbay.pub. There's a host.sh script in the repo. You can host your own copy of this site on your own channel by cloning the repo and running this command:
./host.sh https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344
Then point your browser here (make sure to include the trailing slash; it's required for relative browser imports to work):
https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344/
You can now make whatever changes you want to the code, and they will take effect as soon as you refresh the browser. I actually wrote this page (a simple static HTML site) using this workflow. Note that you need to refresh twice, in order to flush the curl producers that are still blocked waiting for consumers of the old code.
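For reference, the heart of such a script is just one producer loop per file. Here's a sketch (not the repo's actual host.sh; the file list is hard-coded purely for illustration):

```shell
#!/bin/bash
# Sketch of a host.sh-style static hosting loop.
# Usage: ./host.sh https://patchbay.pub/<your-channel>
base="$1"

serve_file() {
  # Each browser request consumes one producer, so immediately
  # re-attach a fresh producer for the next request.
  while true; do
    curl -s -X POST "$base/$1" --data-binary "@$1"
  done
}

# Guard: do nothing unless a channel URL was given.
if [ -n "$base" ]; then
  for f in index.html chat.js; do   # hard-coded for illustration
    serve_file "$f" &
  done
  wait
fi
```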
I'm working on a more robust CLI tool, which will turn common use cases like hosting a static site with many files into a one liner (rather than having to manually write something like host.sh yourself):
patchbay-cli host https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344 ./dir/to/host
Poor man's file sharing
No sketchy web service is complete without the ability to share files. Pubsub messages are restricted in size, because each message has to be copied into memory in order to be broadcast. MPMC streams have no such limitation. If you POST a 10GB file and send your friend a link, it will be efficiently streamed from your machine, through a patchbay.pub server, to your friend's machine. Note that bandwidth is currently quite rate-limited on the free server, so I only recommend this for relatively small files; otherwise you'll be waiting a while.
Security
patchbay is designed for simple ad-hoc tasks, with very low friction as a primary goal. Having to juggle auth tokens and logins runs counter to those aims, so patchbay probably shouldn't be used for any highly sensitive data. That said, I think it's secure enough for many uses. In general, the longer and more random your channel id is, the less likely anyone else can guess it or stumble upon it. You'll probably want to use longer ids than the one generated for these examples; maybe a UUID or something. Also note that due to 1) rate limiting and 2) the fact that requests block by default, brute forcing probably isn't a viable attack strategy for channel ids of even moderate length.
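A quick way to generate a hard-to-guess channel id locally (assuming openssl is installed; uuidgen works just as well):

```shell
# 16 random bytes = 32 hex characters, far too long to brute force.
id=$(openssl rand -hex 16)
echo "https://patchbay.pub/$id"
```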
Privacy
We don't look at anyone's data as it goes through our servers, but you can also use end-to-end encryption if you don't trust us.
Caveats
- The goal is to keep patchbay free for everyone to use for projects of a reasonable size. However, it's obviously necessary to implement request rate and bandwidth limits. If there's enough interest, I'll probably spin out a product of some sort for high-load use cases. Feel free to reach out at info@patchbay.pub if you have any thoughts.
- Due to the way TCP works, it can be tricky to detect when an HTTP client has disconnected. So if you connect a producer and then cancel it, the server might keep it around for a while before cleaning it up, and a consumer might grab the stale/partial data. The simplest way to handle these situations is to flush it manually by attaching a consumer as soon as you cancel the producer.
- In general, your code needs to assume the connection can fail at any time without having transferred any data. Additionally, even if some or all of a producer body is read from the request, that doesn't guarantee it's been delivered to a consumer. It might just be in an OS buffer. That said, the server should never close a producer connection until a consumer has attached and all data has been read from the producer.
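As a concrete example, flushing a cancelled producer might look like this (the -m 2 timeout is arbitrary; it just keeps the flush from blocking if the channel is already empty):

```shell
# Attach a throwaway consumer to drain any stale producer data
# left on the channel, then discard whatever it receives.
curl -s -m 2 https://patchbay.pub/aa7cc811-d21c-42ef-92cc-a5566fa38344 > /dev/null || true
```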
Related Projects
One nice thing about patchbay is that the core functionality is so simple it lends itself nicely to fresh implementations and derivative ideas. Check some of these out:
- https://github.com/prologic/conduit - Open source implementation of queues/pubsub, inspired by patchbay
- https://github.com/schollz/duct - Open source implementation of queues/pubsub, inspired by patchbay
- https://github.com/VictorioBerra/patch-me - A GUI for services like patchbay.pub
- https://github.com/ripienaar/piper - Patchbay style distributed pipe using NATS.io
If you have a patchbay-related project, let us know so we can add it to the list!
Newsletter
If you'd like to stay up to date on future developments, consider subscribing to our newsletter: