Why Userscripts
Free, Fast, and Freaky Powerful
Free
The challenge with most other approaches, such as serverless functions, is that they all have to run on a server somewhere (or many servers). The cost of running a model on a remote server varies, but for high-volume cases and video it is a very non-trivial cost. And if you want to keep many models "hot" and ready to serve, that cost can skyrocket.
By default, userscripts run locally, such as on each annotator's computer. This means there is no additional cost to run them. You can still make remote calls (and even use serverless approaches if you want).
Freaky Powerful
A challenge with any remote method is that it's often hard to customize. What if you want to run a few models and combine their output? Or do post-processing with a library like OpenCV? Further, most remote implementations are concrete: a specific version of a specific algorithm is baked in. Given the rapid pace of change in the ML industry, this is very limiting. Remote contexts are also hard to debug, and formatting issues, like trying to base64 encode an image, can drain hours of dev time.
With userscripts you are in complete control. Run any model. Run the latest version. Run your own versions. You can add your secret sauce, adjust parameters to your needs, and so much more. Plus, with userscripts it's really fast to get feedback. It's basically as fast as coding normally. No need to wait for a remote model to spin up, remote code to load, or a "deploy" process to finish. All the hard parts, like loading media and post-processing outputs, are handled for you.
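As a rough illustration, here is a minimal sketch of what a local, in-browser userscript could look like. The TensorFlow.js `@tensorflow-models/coco-ssd` package and its `load()` / `detect()` calls are real; the helpers `getImageElement()` and `createBoxInstance()` are hypothetical placeholders for however your annotation tool exposes the current image and accepts new annotations.

```javascript
// Minimal sketch: run a pre-trained object detector locally in the browser.
// coco-ssd is a real TensorFlow.js model package; getImageElement() and
// createBoxInstance() are hypothetical stand-ins for the host tool's API.
import * as cocoSsd from '@tensorflow-models/coco-ssd';

async function runDetector() {
  // Load the pre-trained COCO-SSD model (cached after the first load).
  const model = await cocoSsd.load();

  // Hypothetical helper: the image currently shown to the annotator.
  const image = getImageElement();

  // Inference happens locally -- no network round trip.
  const predictions = await model.detect(image);

  // Turn each prediction into an annotation via the (assumed) tool API.
  for (const p of predictions) {
    const [x, y, width, height] = p.bbox;
    createBoxInstance({ x, y, width, height, label: p.class, score: p.score });
  }
}

runDetector();
```

Because the whole loop is ordinary JavaScript, swapping in a different model, combining several models, or adding your own post-processing is just a code change, with no deploy step.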
Fast
Another challenge with remote-focused models is that no matter how optimized they are, there is always network delay. This makes most user interfaces painfully slow; there's always a waiting or loading icon.
Because userscripts run locally (on your annotators' machines) by default, they are super fast. For example, our example implementation of a full-image object detector runs in about 7 milliseconds, and a full-image segmentation model runs in roughly 500-700 ms. Plus, it's fast to get started: you can share a userscript with a teammate and they can try it instantly.