• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • As a 40 something man, I’ve found that my friend groups tend to shift by life stage more than age.

    We have friends that are 10-15 years older than us because our kids are the same age, and we have friends that are 10-15 years younger than us because we have overlapping hobbies or work together.

    At this point in my life, I don’t even bother finding out someone’s age until I’d consider them friends, because it doesn’t matter if we’ve found something we connect over.




  • I think this supports his argument. Having to research desktop environments to decide which is optimized for the potential problems a new user may face, then finding a distro that packages that DE, is quite frankly too much for the average user.

    I’d argue between 3% and 5% of PC users are willing to research and experiment to find the flavor of Linux that truly works for them.

    Linux has come a long way. I still remember using Gentoo as a daily driver and watching Linux cross 1% of desktop share, but the average desktop user doesn’t know the difference between a kernel and a colonel, and they don’t want to.




  • Lots of boring applications that are beneficial in focused use cases.

    Computer vision is great for optical character recognition: think scanning documents to digitize them, depositing checks from your phone, etc. There are also good computer vision use cases in scanning plants to identify them and in facial recognition for labeling the photos on your phone.

    There are also decent opportunities in medical research, with protein analysis for drug development and (again) computer vision to detect cancerous cells and read X-rays and MRIs.

    Today all the hype is about generative AI for content creation, enabled by Transformer technology, but that’s basically just version 2 (or maybe more) of Recurrent Neural Networks, or RNNs. Back in 2015 I remember the essay The Unreasonable Effectiveness of RNNs being just as novel and exciting as ChatGPT.

    We’re still burdened by this caveat from the essay’s first paragraph, though.

    Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.

    This will likely be a very difficult chasm to cross, because there is a lot more to human knowledge than predicting the next letter in a word or the next word in a sentence. We have knowledge domains where, as individuals, we may be brilliant, and others where we may be ignorant. Generative AI is trying to become a genius in all areas at once, and it ends up borrowing “knowledge” from Shakespearean literature to answer questions about modern philosophy because the order of the words in the sentences is roughly similar, given a noun it used 200 words ago.
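    To make the “predicting the next letter” idea concrete, here is a toy sketch of next-character prediction using simple bigram counts. This is my own illustration, not the model from the essay or anything an actual RNN or Transformer does internally; the corpus and function names are invented for the example.

    ```python
    import random
    from collections import Counter, defaultdict

    # Tiny illustrative corpus; real models train on billions of tokens.
    corpus = "the quick brown fox jumps over the lazy dog"

    # Count, for each character, which characters tend to follow it.
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    def generate(seed, length=20, rng=random.Random(0)):
        """Extend `seed` by repeatedly sampling a likely next character."""
        out = seed
        for _ in range(length):
            nxt = follows.get(out[-1])
            if not nxt:  # character never seen mid-corpus; stop
                break
            chars, weights = zip(*nxt.items())
            out += rng.choices(chars, weights=weights)[0]
        return out

    print(generate("th"))
    ```

    Even this crude model produces locally plausible letter sequences while having no idea what any word means, which is the gap the quote above is pointing at.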

    Enter Tiny Language Models. Using the technology from large language models, but hyper-focused on writing children’s stories, they appear to show real progress through specialization, and could allow generative AI to stay focused and stop sounding incoherent when the details matter.

    This is relatively full circle, in my opinion: RNNs were designed to solve one problem well, then they unexpectedly generalized well, and the hunt was on for the premier generalized model. That hunt advanced the technology enormously, and now that technology is being used in Tiny Models, which again look to solve specific use cases extraordinarily well.

    It’s still very much TBD which use cases will be identified that add value, but recent advancements do seem ripe to transition gen AI from a novelty to something truly game changing.