Happy Memorial Day weekend, all! Hope you’re enjoying the respite. While you’re recovering, here are some stories from the past week in gaming, machine learning, and audio that might be worth your time!
Over $10B was spent in the gaming sector in April, with streaming doubling in size compared to this time last year. I’ve acknowledged before that online games aren’t quite there yet when it comes to truly replacing in-person socialization, but the numbers couldn’t be clearer: even with those limitations, gaming and streaming are already incredibly significant pieces of how we interact with each other, especially during these difficult times.
For an app with fewer than 5,000 beta users, a $100M valuation is no small thing, and it really speaks to investors’ expectations for the platform’s potential. It’s one more data point for the importance of voice in the growing space of online social interaction, and I’ll be very excited to see how their service expands with the new capital!
Ubisoft minced no words in declaring that Area F2 was a copy of its popular game Rainbow Six Siege in every way. Since the developers of Area F2 (a company called Qooka, on a subsidiary chain that ultimately leads back to Alibaba) are based in China, Ubisoft seems to have decided it would be more effective to cut off the game’s distribution through Apple’s and Google’s mobile stores than to go after the developer directly. It’s currently unclear whether Ubisoft won this suit or whether the pressure was simply great enough that Qooka was forced to give in, but either way, Qooka recently posted an announcement that Area F2 would be going away for a while as they “work to deliver a better experience to players.” The gaming industry has a long history of taking inspiration from others in the space, but there’s also a general understanding that too close a copy is unethical, which justifies Ubisoft’s displeasure here. It will be interesting to see how the line between “inspired by” and “ripoff of” evolves as elements of game design become ever more repeatable through better and better tooling.
One of the things that makes deep learning research difficult is the long iteration time: your model has to finish training before you can learn whether your idea is any good. Any work that speeds up that process will almost certainly have an outsized effect on the pace of ML research, which makes this a great area to focus on. That said, we’re still a ways away from speeds that truly democratize this kind of work: Microsoft’s blog post references training test models on 1,024 Nvidia V100 GPUs (which cost a few thousand dollars each), so the average researcher or startup may not be able to make the most of these innovations just yet.
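To put that hardware figure in context, here’s a quick back-of-envelope sketch. The ~$8,000 per-GPU price is my own assumption (V100 street prices varied widely); Microsoft’s post doesn’t quote a total cluster cost.

```python
# Rough, illustrative cost of a 1,024-GPU V100 cluster.
# PRICE_PER_GPU_USD is an assumed figure, not from Microsoft's post.
NUM_GPUS = 1024
PRICE_PER_GPU_USD = 8_000  # assumed; actual V100 prices varied

cluster_cost = NUM_GPUS * PRICE_PER_GPU_USD
print(f"Approximate GPU hardware cost: ${cluster_cost:,}")
```

Even ignoring networking, power, and cooling, that’s roughly $8M in GPUs alone, which is well beyond what most individual researchers or small startups can access.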
That’s all for this week! As always, please let me know if you find these posts useful (or not!); all feedback is welcome!