This is not a forum on how to pirate.
Simpler ways to do that…
Except AMD XDNA is a straight-up FPGA, and Intel XEco is as well.
For someone who claims to work in this industry, you sure have no idea what’s going on.
Supporting a product that does this is one thing.
Making it work is another.
One is easier than the other.
You don’t design CPUs for a living unless you’re talking about the manufacturing process, or maybe you’re just bad at it and work for Intel. Your understanding of how FPGAs work is super flawed, and your boner for GPUs is awkward. Let me explain some things as someone who actually works in this industry.
Matrix math is dumb about whatever you pipe through it. It takes an input and gives an output.
That is exactly what all these “NPU” coprocessor cores are about from AMD, Intel, and to a lesser extent Amazon and Google with whatever they’re calling their chips now. They are all about taking an input and producing an output for math operations as fast as possible.
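To make that concrete: the entire “workload” from the chip’s point of view is a few lines of math. Here’s a sketch in numpy; the shapes and weights are made up, and the point is just that the operation is identical no matter what the data means.

```python
import numpy as np

# One fixed weight matrix, two "different" workloads. The math doesn't
# care what the vector represents; shapes and values here are arbitrary.
W = np.random.rand(64, 64).astype(np.float32)

image_patch = np.random.rand(64).astype(np.float32)
audio_frame = np.random.rand(64).astype(np.float32)

y_image = W @ image_patch   # "image inference"
y_audio = W @ audio_frame   # "audio processing"
# Same operation either way: input in, output out, as fast as possible.
```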
In my own work, these little AMD XDNA chips pop out multiple segmented channels way better than GPUs when gated for a single purpose. Image inference, audio, logic, you name it. And then, SHOCKER!, if I move this to a cloud instance, I can reprogram the chip on the fly to swap from one workload to another in 5ms. It’s not a single-purpose math-shoveling instance anymore: it’s doing articulations on audio clips, or if the worker wants, ML transactions for data correlation. This costs almost 75% less than provisioning stock sets of any instances to do the same workload.
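For a rough idea of what that swap looks like in code, here’s a minimal sketch using PYNQ, AMD/Xilinx’s Python framework for their FPGA parts. The bitstream names are hypothetical, and actual reconfiguration time depends entirely on the part and the design, so don’t treat the 5ms as a given.

```python
# Sketch only: swapping FPGA workloads at runtime with PYNQ.
# Bitstream filenames below are placeholders, not real designs.
import time
from pynq import Overlay

audio = Overlay("audio_articulation.bit")   # programs the fabric for audio work
# ... run the audio workload through its kernels ...

t0 = time.perf_counter()
ml = Overlay("ml_correlation.bit")          # reprograms the fabric for ML work
print(f"swap took {(time.perf_counter() - t0) * 1000:.1f} ms")
```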
You have no idea what you’re talking about.
Sweeping and dusting is one thing. Cooking is just fuckin stupid though.
If you’re unfamiliar with FPGAs, you may want to read up a bit, but essentially it’s a generic hardware platform that gets reprogrammed between iterations so it does a specific job more efficiently than a generic instruction set. You tell it what to do, and it does it.
This is more efficient than x86, ARM, or RISC-V because you’re setting the boundaries and capabilities, not the other way around.
Your understanding of GPUs is wrong, though. What people run now exists BECAUSE GPUs were available and able to run those workloads. Not even well, just quickly. Having an FPGA set up for YOUR specific work is drastically more efficient, and potentially faster depending on what you’re doing. Obviously for certain things it’s a round peg in a square hole, but you have to develop for what works for your own specific use-case.
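If a concrete flavor helps, here’s a toy example of “setting the boundaries yourself” in Amaranth, a Python-embedded HDL that targets FPGAs. This is an illustration, not anyone’s shipping design: a multiply-accumulate unit with exactly the widths your workload needs and nothing else.

```python
from amaranth.hdl import Elaboratable, Module, Signal

class Mac(Elaboratable):
    """Multiply-accumulate sized for one specific job, not a generic ISA."""
    def __init__(self, width=16):
        self.a = Signal(width)            # input operand A
        self.b = Signal(width)            # input operand B
        self.acc = Signal(width * 2 + 8)  # accumulator with headroom YOU chose

    def elaborate(self, platform):
        m = Module()
        # Every clock cycle: acc += a * b. That's the entire datapath.
        m.d.sync += self.acc.eq(self.acc + self.a * self.b)
        return m
```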
Most people wouldn’t be able to even afford these things anyway. Don’t worry about it.
Nope. Just taxis, so you don’t own the thing.
Two reasons why this is just another bullshit claim:
1. Generalized robots don’t have any autonomy yet. They require an immense amount of power to be mobile, and charging takes a lot of time. You’d need fleets to replace fleets upon fleets: maybe 20 minutes of runtime, then the same again for charging (back-of-envelope math at the end of this comment).
2. Everything needs to be trained for job-specific tasks. Repetitive, single-purpose work is way easier than a robot juggling multiple jobs. Right now all these tech demos are simplistic at best, and only focus on single jobs.
Tesla’s robot is a total scam, akin to a child’s toy that reacts to certain things, and requires internet connectivity (wonder why???).
Boston Dynamics isn’t even trying this noise, they know what their purpose is…military use.
Agility hasn’t even demonstrated autonomy yet.
1X is maybe the closest, but again…single purpose.
Honda is basically off the map right now, but actually has the most advanced articulation platform.
It’s a mess. Stop worrying about this shit and ignore the headlines for 5 years maybe.
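And the fleet math from point 1, spelled out with the rough 20-minute figures above (estimates, not measurements):

```python
# Back-of-envelope duty-cycle math for the runtime/charging problem.
runtime_min = 20
charge_min = 20

duty_cycle = runtime_min / (runtime_min + charge_min)  # 0.5
robots_per_job = 1 / duty_cycle                        # 2.0, before any margin

print(f"duty cycle: {duty_cycle:.0%}")
print(f"robots needed to keep ONE job continuously covered: {robots_per_job:.0f}+")
```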
No kidding. 🙀
The premise is to enable local-only storage. It’s more secure. Joplin has backends for storing your content securely elsewhere, and it has transaction control, which amounts to the same thing as a WebUI.
Yes. That’s why everyone is scrambling to create new interoperable model languages and frameworks that work on more efficient hardware.
Almost everything that is productized right now stems from work in the Python world from years ago. It got a swift uptake with Nvidia making it easier to use their hardware on compiled models, but now everyone wants more efficient options.
FPGA presents a huge upside to not being locked into a specific vendor, so some people are going that route. Others are just making their frameworks more modular to support the numerous TPU/NPU processors that everyone and their brother needlessly keeps building into things.
Something will come out of all of this, but right now the community shift is to do things without needing so much goddamn power draw. More efficient modeling will come as well, but that’s less important since everything is compiled down to something that is supported by the devices themselves. At the end of the day, this is all compilation and logic, and we just need to do it MUCH leaner and faster than the current ecosystem is creeping towards. It’s not only detrimental to the environment, it’s also not as profitable. Hopefully the latter makes OpenAI and Microsoft get their shit together instead of building more power plants.
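For what “compiled down to something the devices support” looks like in practice: the common route today is exporting to an interchange format and letting the vendor toolchain do the lowering. A minimal sketch with PyTorch and ONNX, using a placeholder model:

```python
# Export a (placeholder) PyTorch model to ONNX so a vendor toolchain
# (Vitis AI, OpenVINO, whatever your NPU ships with) can compile it
# down to what the device actually supports.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

dummy = torch.randn(1, 128)  # example input defining the shape contract
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])
```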
Joplin is a better way to go.
You wake up thinking you’re going to have a good day, and then you read about these fucking people…
AiBot post. Fuck this shit.
You’re just describing a dozen different things that fit this mold, so let me throw some out there and you can decide what does what you want:
These all do what you want if you take the steps to automate pointing at them from whatever your destination endpoint might be. At that point you’re basically NOT using a VPN, only a proxy.
Honestly, I’d just install OpenWRT on the Pi and try out different plugins to find what does what you want. You can simplify all of this by using Dynamic DNS in the first place so you have a predictable hostname.
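For the Dynamic DNS route, the updater side is tiny. Here’s a sketch against DuckDNS’s public update endpoint; the subdomain and token are placeholders, and any other DDNS provider works roughly the same way:

```python
# Minimal DDNS updater: tell DuckDNS to point your subdomain at the
# public IP this request comes from. Cron this on the Pi.
import urllib.request

DOMAIN = "my-subdomain"     # placeholder
TOKEN = "your-token-here"   # placeholder

url = f"https://www.duckdns.org/update?domains={DOMAIN}&token={TOKEN}&ip="
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())  # DuckDNS answers "OK" or "KO"
```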
So had everywhere else. What are you on about?
Why all the hoops in your post then? Just stream it.