  • Agree that other parts of the EM spectrum could enhance the ability of MV to recognize things. Appreciate the insights – maybe I will be able to use this when I get back to tinkering with MV as a hobbyist.

    Of course, identifying one object is just one level. For a general-purpose replacement for human ability, since that’s what the thread is focused (ahem) on, it has to identify tens of thousands of objects.

    I need to rethink my opinion a bit: not only how far along general object recognition is, but also how one can “cheat” to enable robotic automation.

    Tasks that are more limited in scope and variability would be a lot less demanding. For a silly example, let’s say we want to automate replacing fuses in cars. We limit it to cars with fuse boxes in the engine bay, and we mark the fuse box with a visual tag the robot can detect. The layout of the fuses per vehicle model could be stored, and the code on the fuse box identifies the model. The robot then uses actuators to remove the cover, orients itself to the box using more markers, and the rest is basically pick-and-place technology. That’s a smaller and easier problem to solve than “fix anything possibly wrong with a car”. A similar deal could be done for oil changes.
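    To make the “cheat” concrete, here’s a minimal sketch of the tag-lookup step, assuming a recent OpenCV build with the ArUco module; the tag IDs, fuse layout table, and vehicle model are all invented for illustration:

    ```python
    # Rough sketch: find a fiducial tag on the fuse box, map the tag ID to a
    # stored per-model fuse layout, and hand the coordinates to ordinary
    # pick-and-place code. FUSE_LAYOUTS and the fuse name are placeholders.
    import cv2

    # Hypothetical lookup: tag ID -> vehicle model and fuse offsets (mm, relative
    # to the tag). In reality this would come from a per-model database.
    FUSE_LAYOUTS = {
        7: {"model": "Example Sedan 2019", "fuse_F12": (34.0, 80.5)},
    }

    def find_fuse(image_path, fuse_name="fuse_F12"):
        img = cv2.imread(image_path)
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
        corners, ids, _ = detector.detectMarkers(img)
        if ids is None:
            return None  # no tag visible -- robot repositions and tries again
        layout = FUSE_LAYOUTS.get(int(ids[0][0]))
        if layout is None:
            return None  # unknown vehicle model
        # Offset of the wanted fuse from the tag; converting this to robot
        # coordinates would use the tag corners and a calibrated camera.
        return layout["model"], layout[fuse_name]
    ```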

    For general-purpose MV object detection, I would have to go check, but my guess is that state-of-the-art MV can identify a dozen or maybe even hundreds of objects, so I suppose one could do quite a bit with that to automate some jobs. MV is not, to my knowledge, at the level of a general-purpose replacement for humans. Yet. Maybe it won’t take that much longer.

    In ~15 years in the hobbyist space we’ve gone from recognizing anything of a specified color under some lighting conditions to identifying several specific objects. And without a ton of processing power either. It’s pretty damn impressive progress, really. We have security cameras that can identify animals, people, and delivery boxes. I am probably selling short what MV will be able to do in 15 more years.
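    As a rough idea of where that hobbyist baseline sits today: an off-the-shelf pretrained detector already recognizes on the order of 80 everyday object classes on modest hardware. A minimal sketch, assuming the ultralytics package and a stock YOLOv8 nano model (the image name and threshold are placeholders):

    ```python
    # Hobbyist-grade version of the security-camera example: a small pretrained
    # detector that labels people, dogs, cats, trucks, etc. (the ~80 COCO classes).
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")          # small pretrained model, downloads on first use
    results = model("front_porch.jpg")  # hypothetical snapshot from a camera

    for r in results:
        for box in r.boxes:
            label = model.names[int(box.cls)]
            conf = float(box.conf)
            if conf > 0.5:
                print(f"{label}: {conf:.2f}")
    ```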





  • Damn… nice work on the research! I will read through these as I get time. I genuinely didn’t think there would be much for manual labor stuff. I’m particularly interested in the plumber analysis.

    I think augmentation makes a lot of sense for jobs where a human body is needed and it will be interesting to see how/if trade skill requirements change.

    I’ll edit this as I read…

    Plumbing. The article makes the point that it isn’t all or nothing. That as automation increases productivity, fewer workers are needed. Ok, sure, good point.

    Robot plumber? A humanoid robot? Not very likely until enormous breakthroughs are made in machine vision (I can go into more detail…), battery power density, sensor density, etc. The places and situations vary far too greatly.

    Rather than an Asimov-style robot, a more feasible yet still productivity-enhancing solution is automating individual tasks like pipe cutting. For example, you take your phone and measure the pipe as described in the link. Press a button, walk out to your truck, and by the time you get there the pipe cutter has already cut the length you need, saving you several minutes. That savings probably means you can do more jobs per day. Cool.
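    A back-of-the-envelope sketch of what the truck-side automation could look like, assuming a cutter that accepts a simple serial command; the port, command format, and kerf allowance are all made up for illustration:

    ```python
    # The point is how little intelligence the cutter needs: it just receives a
    # length measured on the phone and cuts. The protocol below is hypothetical.
    import serial  # pyserial

    KERF_MM = 2.0  # material lost to the blade (assumed value)

    def cut_pipe(measured_length_mm: float, port: str = "/dev/ttyUSB0") -> None:
        target = measured_length_mm + KERF_MM
        with serial.Serial(port, 115200, timeout=5) as cutter:
            # Hypothetical one-line protocol: "CUT <length in mm>\n"
            cutter.write(f"CUT {target:.1f}\n".encode())
            print(cutter.readline().decode().strip())  # e.g. an ACK from the cutter

    # cut_pipe(312.5)  # length measured on the phone, in millimetres
    ```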

    Edit 2

    Oil rig worker. Interesting and expected use of AI to improve various aspects of the drilling process. What I had in mind was more like the people that actually do the manual labor.

    Autonomous drones, for example, can be used to perform inspections without exposing workers to dangerous situations. In doing so, they can be equipped with sensors that send images and data to operators in real time to enable quick decisions and effective actions for maintenance and repair.

    Now that’s pretty cool and will probably reduce demand for those performing inspections (some of whom will have to be at the other end receiving and analyzing data from the robot, until such time as AI can do that too).

    Autonomous robots, on the other hand, can perform maintenance tasks while making targeted repairs to machinery and equipment.

    Again, the technologies required to make this happen aren’t there yet. Machine vision (MV) alone is way too far from being general purpose. You can devise an MV system that can, say, detect a Coke can and maybe a few other objects under controlled conditions.

    But that’s the gotcha. Change the intensity of the lighting, or change its color temperature or hue, and the MV probably won’t work. It might also mistake a Diet Coke can, a Pepsi can, or a similar-sized cylinder for the can it’s looking for. If you want it to recognize any aluminum beverage can, that might be tough. Meanwhile, any child can easily identify a can under any number of conditions.
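    To make that concrete, here’s the kind of color-threshold detector a hobbyist MV setup often starts with. The HSV ranges and pixel-count cutoff are illustrative, not tuned; the point is how easily a lighting change pushes the same object outside a hard-coded range:

    ```python
    # Classic color-based "Coke can" detector: threshold a fixed hue/saturation/
    # value range. Warm evening light, a dimmer bulb, or a different red object
    # can all break or fool it.
    import cv2
    import numpy as np

    def find_red_can(bgr_image: np.ndarray) -> bool:
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Red wraps around hue 0 on OpenCV's 0-179 hue scale, so use two bands.
        lower = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 120, 80), (179, 255, 255))
        mask = cv2.bitwise_or(lower, upper)
        # "Found" if enough pixels fall in range -- a crude and brittle test.
        return cv2.countNonZero(mask) > 2000
    ```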

    Now imagine a diesel engine generator, let’s say. Just getting a robot to change the oil would be nice. But it has to either be limited to a specific model of engine or be able to recognize where the oil drain plug and fill spot are on the various engines it might encounter.

    What if the engine is a different color? Or dirty instead of clean? Or it’s night, or noon (harsh shadows), overcast (soft shadows), or sunset (everything is tinted yellow-orange)? I suppose it could be trained for a specific rig and a specific time of day, but that means setup time costs a lot. It might be smarter to build some automated devices onto the engine, like a valve on the oil pan and a device to pump new oil in from a vat or standard container or whatever. That would be much easier. Maybe they already do this, idk.

    Anyway… progress is being made in MV and we will make far more. That still leaves the question of an autonomous robot of some kind able to remove and reinstall a drain plug. It’s easy for us but you’d be surprised at how hard that would be for a robot.







  • I don’t disagree with most of what you said. I think so far the following jobs are safe from direct AI replacement, because it is much harder to replace manual laborers.

    • Oil rig worker
    • Plumber
    • Construction worker
    • Landscaper/gardener
    • Telephone repair tech
    • Mechanic
    • Firefighter
    • Surveyor
    • Wildlife management officer
    • Police

    What companies won’t realize until too late is that paying customers need jobs to pay for things. If AI causes unemployment to rise to some ungodly high, paying customers will become rare and companies will collapse in droves.


  • Appreciate the detailed response!

    Indeed, intelligence is …a difficult thing to define. It’s also a fascinating area to ponder. The reason I asked was to get an idea of where your head is at with the claims you made.

    Now, I admit I haven’t done a lot with gpt-4 but your comments make me think it is worth the time to do so.

    So you indicate gpt-4 can reason. My understanding is gpt-4 is an LLM, basically a large scale Markov chain, trained to respond with appropriate output based on input (questions).

    On the one hand, my initial reaction is: no, it doesn’t reason it just mimics or simulates human reasoning that came before it in text form.

    On the other hand, if a program could perfectly simulate whatever processes are involved in reasoning by a human to the point that they’re indistinguishable, is it not, in effect, reasoning? (I suppose this amounts to a sort of Turing Test but for reasoning exercises).

    I don’t know how gpt-4 LLMs work yet. I imagine that, if the model is essentially a Markov model (specifically a Markov chain) trained on human language, then the underlying semantics are sort of implicitly captured in the statistics. Simplistically: if many sentences reflect human knowledge that cars are vehicles and not animals, then it’s statistically unlikely for anyone to write about the attributes and actions of animals when talking about cars. I assume the LLM is of such a scale that it permits this apparently emergent behavior.
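    A toy illustration of that statistics-capture-semantics idea, using nothing but bigram counts over a made-up corpus (real LLMs are vastly larger and condition on whole contexts, so this is only an analogy):

    ```python
    # Even a crude bigram count "knows" that cars drive and dogs bark, purely
    # because those word pairs co-occur in the text it has seen.
    from collections import Counter

    corpus = (
        "the car drives down the road . the car needs fuel . "
        "the dog barks at the car . the dog chases the cat ."
    ).split()

    bigrams = Counter(zip(corpus, corpus[1:]))

    def next_word_probs(word: str) -> dict:
        follows = {b: c for (a, b), c in bigrams.items() if a == word}
        total = sum(follows.values())
        return {w: c / total for w, c in follows.items()}

    print(next_word_probs("car"))  # "drives"/"needs" likely; "barks" never seen
    print(next_word_probs("dog"))  # "barks"/"chases" likely; "drives" never seen
    ```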

    I am skeptical about judgement calls. I would think some sensory input would be required. I guess we have to outline various types of judgement calls to really dig into this.

    I am willing to accept that gpt-4 simulates the portions of the brain that deal with semantics and syntax, both the receiving and transmitting abilities. And, maybe to some degree, knowledge and understanding.

    I think “very similar to a complete brain” is an overstatement, as the brain also does some amazing things with vision, hearing, proprioception, and touch, among other things. Human brains can analyze situations and take initiative, understand how things work and apply that to their repair, improvement, duplication, etc. We can understand and solve problems, and so on. In other words, I don’t think you’re giving the brain anywhere near enough credit. We aren’t just Q&A machines.

    We also have to be careful of the human tendency to anthropomorphize.

    I’m curious to look into vector databases and their applications here. Adding what amounts to memory, or extended context, sounds extremely interesting.
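    A minimal sketch of the retrieval pattern involved, with a bag-of-words stand-in for a real embedding model and invented notes; only the store-embed-retrieve loop is the point:

    ```python
    # Store notes as vectors, embed the query the same way, and pull back the
    # nearest notes to stuff into the prompt as "memory".
    import numpy as np

    NOTES = [
        "the customer's boiler was installed in 2015",
        "gpt-4 conversation from tuesday about fuse layouts",
        "shopping list: solder, flux, 15mm pipe",
    ]

    VOCAB = sorted({w for n in NOTES for w in n.split()})

    def embed(text: str) -> np.ndarray:
        # Toy bag-of-words vector; a real system would use a learned embedding.
        words = text.split()
        return np.array([words.count(w) for w in VOCAB], dtype=float)

    def retrieve(query: str, k: int = 1) -> list:
        q = embed(query)

        def cosine(v: np.ndarray) -> float:
            denom = np.linalg.norm(q) * np.linalg.norm(v)
            return 0.0 if denom == 0 else float(q @ v) / denom

        return sorted(NOTES, key=lambda n: cosine(embed(n)), reverse=True)[:k]

    print(retrieve("when was the boiler installed"))
    ```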

    Interesting to ponder what the world would be like with AGI taking over the jobs of most knowledge workers, artists, and so on. (I wonder if someone could create a CEO replacement…)

    What does it mean for a capitalist society with masses of people permanently unemployed? How does the economy work when nobody can afford to buy anything because they’re unemployed? Does this create widespread poverty and collapse or a post-scarcity economy in some sectors?

    Until robots mechanically evolve to Asimov’s vision, at least, manual labor is safe. Truly being able to replace a human body with a robot is still a ways off due to lack of progress on several fronts.








  • USGS research geologist Jeff Pigati and his colleagues (including Bennett and other co-authors of the 2021 paper) recently radiocarbon-dated conifer pollen—mostly from fir, spruce, and pine—from the same ancient ground surface as the tracks and the ditchgrass seeds. They also used another type of dating, called optically stimulated luminescence (a type of dating that measures when a grain of quartz was last exposed to sunlight) on sediment samples from between the oldest two layers of tracks. The results lined up very well with Bennett and his colleagues’ original radiocarbon dates; the tracks couldn’t be any younger than about 21,500 years old.

    I had never heard of this other method with the quartz. Interesting.

    Ditchgrass, as its name suggests, is an aquatic plant, exactly the kind of thing you’d expect to find along the shore of a lake. But aquatic plants tend to soak up groundwater, which can contain older carbon than the rain that waters more landlubberly plants. Seeds from aquatic plants like ditchgrass can (but don’t always) look older than they really are when radiocarbon-dated—sort of like the radiocarbon version of carrying a fake ID.

    The other two methods don’t suffer such problems, so now that all three give similar results, the evidence is much stronger.
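    For what it’s worth, the “fake ID” effect can be put in rough numbers with the conventional radiocarbon age relation; the 5% figure below is made up purely for illustration:

    ```latex
    % Conventional radiocarbon age from the measured ^{14}C fraction F
    % (Libby mean life of 8033 yr):
    t = -8033 \,\ln F
    % If an aquatic plant builds, say, 5\% of its carbon from "dead"
    % groundwater carbon, F is scaled by 0.95 and the apparent age is
    % inflated by a fixed offset, whatever the sample's true age:
    \Delta t = -8033 \,\ln(0.95) \approx 410 \text{ years}
    ```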