I Am Mesmerized By Our New Robotic Vacuum

https://dev.to/deciduously/i-am-mesmerized-by-our-new-robotic-vacuum-10pc

Love this commentary on the brilliance of robot vacuums. Worth reading the whole post, but here is the TL;DR.

I’m fascinated by how effective this budget-end robot vacuum’s seemingly simple set of algorithms is at cleaning our whole weird space. I want to model the problem and implement the movement algorithms myself, to see whether I can replicate that emergent behavior with totally random inputs or whether it requires more tuning, and if so, what sort. It’s not a type of programming I have much experience with. What would you use to explore this sort of thing?

I have some general space-filling/path-finding algorithms in my toolkit to start from, but I also know that some environments are easier than others for this sort of modeling. Some options I’ve heard of but don’t know anything about:

  • Processing - Java (I don’t really know Java, but I’m studying C++. Similar-ish?)
  • The Python turtle module - Python. I don’t know anything about turtle and very little about Python, but it sounds like it can be used to explore this problem space. Python seems like a good choice, but I don’t know the ecosystem at all.
  • I guess the HTML canvas element, but that sounds unwieldy and complicated. Is there a framework you’d recommend?
  • I know Rust has some geospatial crates and image/graph crates for visualization, but nothing I know of that integrates them for easy experimentation and one-offs.
  • Game engines? I’ve never used any; is that the right genre of tool for this? Love2d for Lua looks like it might be a good choice, but it also might be overkill.
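For a first pass, you may not even need graphics: the core question (does “drive straight, bounce in a random direction” eventually cover the whole floor?) can be explored with a plain-Python grid simulation. Here’s a minimal sketch, assuming the budget-vacuum strategy really is just random-heading-on-collision; the function name, grid model, and move set are all my own invention, not anything from the post.

```python
import random

# The eight straight-line headings a grid robot can drive in.
HEADINGS = [(1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]

def simulate_bounce(width=20, height=20, steps=5000, seed=42):
    """Simulate a robot that drives straight until it hits a wall,
    then picks a uniformly random new heading. Returns the fraction
    of grid cells visited at least once (a crude coverage metric).
    """
    rng = random.Random(seed)
    x, y = width // 2, height // 2   # start in the middle of the room
    dx, dy = 1, 0                    # initial heading: due east
    visited = {(x, y)}
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny            # open floor: keep driving straight
            visited.add((x, y))
        else:
            # "Collision": stay put this tick and pick a random heading.
            dx, dy = rng.choice(HEADINGS)
    return len(visited) / (width * height)
```

Coverage is monotone in step count for a fixed seed, so you can plot coverage versus steps and compare purely random bouncing against any tuned variant (e.g. biased turn angles). Swapping the rectangle for a “weird space” is just a matter of replacing the bounds check with a lookup into an obstacle map.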

…that’s it, really. Is there anything else I should be aware of?

Also, if you know more about these robot vacuums than I do and can enlighten me/us, please do!