

Maybe I want to move back into it… And selling has a 10% cost after realtor fees and closing fees.
Based on the amount of vitriol I’ve personally received on this site for renting one property while I am temporarily relocated to attend school, the answer is yes.
For some reason everyone views being a landlord as easy money, but in reality the return on investment from renting out a single-family home is worse than the stock market's.
Edit: Isn’t it funny how the critics below didn’t even ask questions about a specific situation where it does make sense to rent out an owned home? Instead of trying to understand why someone might make the choice they make, they sling insults and make sweeping assumptions to reinforce their skewed world view. Honestly it’s this shit that’s why Trump won. Leftists can’t see the forest for the trees and are willing to engage in ever-escalating purity tests that only alienate voters sympathetic to leftist causes.
I worked hard to be able to own my own house. I saved money and took out a loan. I never received a penny from my parents or an inheritance from a family member who died.

A greater return on investment can absolutely be made by investing in the S&P 500; returns on single-family homes will be worse. The S&P 500 can be expected to rise an average of 10% per year, while a single-family home will appreciate by about 4.3% per year. With interest rates higher than that level of appreciation, there is effectively no profit from the leverage that typically comes with borrowing money. Renting is typically 37% cheaper than buying on a month-to-month basis, and owners don’t expect to break even on a home until after 5-10 years of ownership (depending on the city). Over 2/3 of the cost of a mortgage goes toward loan interest and taxes.

So what does a house get you, given all these downsides? Freedom. Freedom to decorate how you choose. To remodel, to build a deck, to install Ethernet throughout the house, to add an extension. But most of all, it gives long-term stability. After that roughly 5-year period where a homeowner is taking a loss relative to renting, they are finally ahead financially. This is why it doesn’t make sense to sell a home due to short-term circumstances: owning a home is inherently a long-term benefit. Especially when selling costs 10% of the home's value, and it would take around 3 years of appreciation just for the home's value to grow enough to cover that cost, which is not even remotely guaranteed, as evidenced by home values increasing only 0.12% after falling by 5% the previous year.
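The "years to recover the selling cost" claim can be sanity-checked with the comment's own numbers (a 10% selling cost and 4.3% average annual appreciation; both are assumptions carried over from above, and real figures vary by market):

```python
import math

selling_cost = 0.10   # fraction of home value lost to realtor and closing fees
appreciation = 0.043  # assumed average annual home appreciation

# To walk away with today's value after paying a 10% selling cost,
# the home must first appreciate by a factor of 1 / (1 - 0.10).
required_growth = 1 / (1 - selling_cost)

# Years of compounding at 4.3% needed to reach that growth factor.
years = math.log(required_growth) / math.log(1 + appreciation)
print(f"{years:.1f} years")  # roughly 2.5 years
```

That lands around two and a half years under these assumptions, in the same ballpark as the comment's estimate, and it ignores the transaction costs of buying again later.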
I don’t know why you are mentioning Starship when I made no mention of that. Starship HLS is also a dumb idea, but that’s beside the point.
SLS is horribly expensive for what it provides.
The Defense Production Act could be used to meet these ends. SpaceX is a defense contractor and exists at the privilege of the US Government for the US Government.
Coming from several people who work with SpaceX, there is a dedicated group of people whose entire job is to keep Elon distracted away from all vital SpaceX functions.
SLS is on track to be more expensive when adjusted for inflation per moon mission than the Apollo program. It is wildly too expensive, and should be cancelled.
This is coupled with the fact that the rocket is incapable of sending a crewed capsule to low lunar orbit, which is why the Lunar Gateway is planned for a Near-Rectilinear Halo Orbit instead.
Those working in the space industry know that SpaceX’s success is not because of Elon but instead Gwynne Shotwell. She is the President and COO of SpaceX and responsible for day-to-day operations across all things SpaceX. The best outcome after the election is to remove Elon from the board and revoke his ownership of what is effectively a defense company for political interference in this election. Employees at SpaceX would be happy, the government would be happy, and the American people would be happy.
The technical definition of AI in academic settings is any system that can perform a task with relatively decent performance and do so on its own.
The field of AI is absolutely massive and includes super basic algorithms like Dijkstra’s Algorithm for finding the shortest path in a graph or network. Exact shortest-path search is provably optimal, but at the scale of a real road network it is too slow, so navigation systems layer programmed heuristics on top (as in A* search) to approximate optimal routes quickly. It’s entirely possible that the path generated is in fact not optimal, which is why your GPS doesn’t always give you the guaranteed shortest path.
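For the curious, Dijkstra's algorithm itself fits in a few lines. This is a minimal sketch with a made-up toy road graph; real navigation systems operate on millions of nodes and add heuristics on top:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a graph of {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    heap = [(0, start)]  # priority queue of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical toy road network: edge weights are travel costs.
roads = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Note that this version is exact; the "AI" flavor comes in when a heuristic estimate of remaining distance is added to the priority, as in A*.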
To help distinguish fields of research, we use extra qualifiers to narrow focus, such as “classical AI” and “symbolic AI”. Even “Machine Learning” is too ambiguous, as it originally referred to statistical processes that find trends in data, or “statistical AI”. Ever used Excel to find a line of best fit for a graph? That’s “machine learning”.
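That Excel trendline really is just least-squares regression. A minimal sketch with made-up data points, fitting slope and intercept by hand:

```python
# Least-squares line of best fit: the same "machine learning" Excel does
# when you add a trendline to a chart. Toy data, assumed for illustration.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"y = {slope:.2f}x + {intercept:.2f}")  # y = 1.95x + 0.15
```

The "learning" here is nothing more than choosing the two parameters that minimize squared error over the data, which is exactly the statistical lineage the comment is pointing at.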
Admittedly, “statistical AI” does accurately encompass all the AI systems people commonly think about, like “neural AI” and “generative AI”. But without getting into more specific qualifiers, “Deep Learning” and “Transformers” are probably the best ways to narrow down what most people think of when they hear “AI” today.
You do realize that everything posted on the Fediverse is open and publicly available? It’s not locked behind some API or controlled by any one company or entity.
The Fediverse is the Wikipedia of social platforms, and any researcher or engineer, including myself, can and will use Lemmy data to create AI datasets with absolutely no restrictions.
To add to this insight, there are many recent publications showing the dramatic improvements that come from adding another modality, like vision, to language models.
While this is my conjecture that is loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.
I am an LLM researcher at MIT, and hopefully this will help.
As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+ (more precisely, the next token), with some randomness injected so the output isn’t exactly the same for any given prompt.
The probability of the next word comes from what was in the model’s training data, combined with a very complex mathematical method, called self-attention, that computes the impact of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.
This relatedness factor is very computationally expensive and grows quadratically with the number of tokens, so models are limited in how many previous words can be used to compute relatedness. This limitation is called the Context Window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.
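The relatedness computation can be sketched as scaled dot-product attention. This is a heavily simplified toy: the hand-made 2-d vectors stand in for learned token representations, and real models use separate learned projections for queries, keys, and values across many dimensions:

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # relatedness of this token to every token in the window
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # each output is a weighted blend of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# three "tokens", each a hand-made 2-d vector (hypothetical numbers)
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(vecs, vecs, vecs)
print(mixed)  # each token is now a blend of all tokens, weighted by relatedness
```

The nested loop, scoring every token against every other token, is exactly why the cost grows with the square of the context length.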
This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So literally, the model builds entire responses one word at a time, from left to right.
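That generation loop can be illustrated with a toy "model". The table of next-token probabilities below is entirely made up; a real LLM computes these probabilities with the attention machinery described above, over tens of thousands of possible tokens:

```python
import random

# A toy next-token table: for each token, the possible successors and
# their probabilities. All numbers are invented for illustration.
model = {
    "<start>": [("the", 0.7), ("a", 0.3)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.6), ("<stop>", 0.4)],
    "dog": [("sat", 0.6), ("<stop>", 0.4)],
    "sat": [("<stop>", 1.0)],
}

def generate(model, max_tokens=10):
    tokens = ["<start>"]
    # repeat next-token prediction until the stop token appears
    while tokens[-1] != "<stop>" and len(tokens) < max_tokens:
        candidates, probs = zip(*model[tokens[-1]])
        # sampling is the injected randomness that varies the output
        tokens.append(random.choices(candidates, weights=probs)[0])
    return " ".join(tokens[1:-1])

print(generate(model))  # e.g. "the cat sat"
```

Running it several times gives different sentences from the same starting point, which is the left-to-right, one-token-at-a-time behavior described above in miniature.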
Because all future words are predicated on the previously stated words in either the prompt or subsequent generated words, it becomes impossible to apply even the most basic logical concepts, unless all the components required are present in the prompt or have somehow serendipitously been stated by the model in its generated response.
This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.
From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.
—-
+ More specifically, these words are tokens, which usually represent some smaller part of a word. For instance, “understand” and “able” would be represented as two tokens that, when put together, become the word “understandable”.
Agreed.
Nevertheless, the Federal regulators will have an uphill battle as mentioned in the article.
Neither “puffery” nor “corporate optimism” counts as fraud, according to US courts, and the DOJ would need to prove that Tesla knew its claims were untrue.
The big thing they could get Tesla on is the safety record for autosteer. But again there would need to be proof it was known.
I am a pilot and this is NOT how autopilot works.
There are some autoland capabilities in the larger commercial airliners, but autopilot can be as simple as a wing-leveler.
The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except when flying a programmed instrument approach, and only once the autopilot captures the glideslope (so you need to be in the correct general area in 3D space for it to work).
An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.
Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released 6 years ago, it still did more than most aircraft autopilots.
I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, it doesn’t mean that it won’t imitate such actions.
These days, when people think AI, they mostly are referring to Language Models, as these are what most people will interact with. A language model is trained on a corpus of documents. In the case of Large Language Models like ChatGPT, they are trained on just about every written document in existence. This includes Hollywood scripts and short stories about sentient AI.
If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.
I am a satellite software engineer turned program manager. This is not unexpected in this current environment, however the conditions that created the environment are abnormal.
This solar cycle is much stronger than past cycles. I’m on mobile, so I can’t get a good screenshot, but you can go here to see this cycle and the last cycle, as well as an overlay of a normal cycle https://www.swpc.noaa.gov/products/solar-cycle-progression
As solar flux increases, the atmosphere expands considerably, causing more drag than predicted. During periods of solar minimum, satellites can remain in a very low orbit with minimal station keeping. However, at normal levels of solar maximum, 5-year orbits can easily degrade to 1-year orbits. Forecasters say we are still a year away from solar maximum, and flux is already higher than last cycle’s all-time high (which was also an anomalously strong cycle). So it will get worse before it gets better.
TLDR: Satellites are falling out of the sky because the sun is angy
The only upside I can think of is they’d actually start caring about the planet instead of thinking they’ll be dead in 100 years anyway.
I believe it could and should be made harder, but there is already a high barrier to purchasing an investment property. For a business loan on residential housing, an investor needs a 25-30% down payment. Also, I think the longest terms are 15 years and not 30, but I could be wrong.
All the small-time landlords acquired their homes through primary-residence loans, which allow for PMI and smaller down payments that only exist because they are subsidized by the government. A primary-residence loan requires either that the owner lie to the government and the bank, which exposes them to serious liability in the sense that the lender could make the loan due immediately if found out, or that the owner has lived in that home for at least one year.