19Bones79
Member
Posts: 288
Threads: 54
Likes Received: 83 in 49 posts
Likes Given: 35
Joined: Sep 2024
Reputation:
279
I posed this question to Grok 2, Elon's robot dog.
Here's the reply:
""
The end goal of AI, or artificial intelligence, is a topic of much debate and varies depending on who you ask—whether it's AI researchers, ethicists, technologists, or policymakers. Here are some perspectives:
Enhancing Human Capabilities:
Productivity and Efficiency: Many in industry see AI's primary goal as augmenting human productivity, automating repetitive tasks, and solving complex problems more efficiently than humans could alone.
Health and Well-being: In healthcare, the goal might be to improve diagnostics, personalize medicine, or assist in surgeries, thereby enhancing human health and longevity.
Scientific Discovery:
AI is also aimed at accelerating scientific research by analyzing vast datasets, simulating complex systems, or discovering patterns that humans might miss. For example, in fields like astronomy, biology, or climate science, AI can push the boundaries of human knowledge.
Creation of General AI (AGI):
Some researchers are focused on achieving Artificial General Intelligence, where AI would have the ability to understand or learn any intellectual task that a human being can. This is often seen as a long-term goal where AI would think, reason, and solve problems in a way that's indistinguishable from human cognition.
Ethical and Societal Good:
There's an increasing emphasis on ensuring AI contributes positively to society. Goals here include reducing bias in decision-making, promoting fairness, and ensuring AI systems respect privacy and human rights.
Entertainment and Creativity:
In the arts and entertainment, AI's goal might be to create new forms of art, music, or narratives, or to enhance gaming and virtual experiences.
Autonomy:
In robotics, one goal is to develop fully autonomous systems for tasks like driving, exploration (e.g., space), or performing tasks in environments unsafe for humans.
Commercial Goals:
From a business perspective, the end goal might be profitability or market disruption through new AI-driven products or services.
Existential and Philosophical:
There are broader philosophical goals where AI might be seen as a step towards understanding consciousness, intelligence, or even as a means for humanity to transcend biological limitations or explore immortality.
However, these goals are not universally agreed upon and are subject to ethical, safety, and regulatory considerations:
Ethical AI: Ensuring AI does not harm, respects privacy, and is fair is crucial.
Safety: Preventing unintended consequences or misuse of AI is vital.
Regulation: Governments and international bodies are increasingly involved in defining what the acceptable end goals of AI should be, with a focus on human-centric development.
In summary, while there isn't a singular "end goal" for AI, the overarching aim seems to be to enhance, complement, or sometimes challenge human capabilities and societal structures, all while navigating the complex landscape of ethics, safety, and law.
""
I will withhold my personal opinion on the reply above and simply ask: how is this not exactly the story of the forbidden fruit in Genesis?
Is this technology in any way pro-human, such that it allows us to develop at our own pace?
Ksihkehe
Immoderate unmoderator
Posts: 203
Threads: 16
Likes Received: 38 in 12 posts
Likes Given: 87
Joined: Sep 2024
Reputation:
182
(01-23-2025, 07:04 AM)19Bones79 Wrote: Productivity and Efficiency: Many in industry see AI's primary goal as augmenting human productivity, automating repetitive tasks, and solving complex problems more efficiently than humans could alone.
*Yawn*
Error rates scale with speed. No avoiding it.
It can process information faster, which means it can also make errors faster.
In the recent past, AI has mostly been a term hyped by stock fluffers trying to attract more inflows. No real market. AI chips can be rented for a dollar or two an hour. You basically can't make a profit farming out processor power with these, because there isn't any real commercial application.
Quote:Health and Well-being: In healthcare, the goal might be to improve diagnostics, personalize medicine, or assist in surgeries, thereby enhancing human health and longevity.
This will end up being one of the most insidious facets of AI: opaque decision-making processes that people are forced to follow, imposed by authority figures who are already never held responsible. AI will be the final wedge between doctors and patients.
Trust the science... meanwhile, pharma and anti-human billionaire trusts have sponsored NGOs that are currently running through journals going back decades to file retractions. They're doing their best to astroturf the potential training materials for future AIs in ways they believe are critical to further their anti-human agenda. They already know what these healthcare AIs will really be used for.
This topic alone is worth a large thread. During COVID, a lot came out about the use of AI. A significant number of the studies coming out in support of COVID vaccines, and a bunch of shit related to COVID, were using what they called "synthetic data" for their statistical analysis. Synthetic data is when they take a shit ton of proprietary health information (which may not have been collected with consent) and then model things with it. The modeling is not available for review, nor is the data itself, because it has to be purchased, but these papers, with conclusions that defy real-world data, get published with a rubber stamp from their fellow COVIDiots.
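To make "synthetic data" concrete, here is a minimal Python sketch of the basic move, with entirely invented numbers standing in for the proprietary records (this illustrates the general technique, not any real study's code): fit a generative model to a curated slice of the data, sample look-alike records from it, and run the "analysis" on the samples. Whatever the curation baked in comes back out looking like a finding, and reviewers never see the originals.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for private records nobody gets to review:
# columns = [age, outcome_score], all numbers invented.
real = rng.normal(loc=[50.0, 1.0], scale=[12.0, 0.5], size=(1000, 2))

# "Curate" the model: fit it only to a favorable subset.
favorable = real[real[:, 1] > 1.0]
mu = favorable.mean(axis=0)
cov = np.cov(favorable, rowvar=False)

# Sample a huge synthetic cohort from the fitted model.
synthetic = rng.multivariate_normal(mu, cov, size=100_000)

# The "analysis" now reports whatever the curation baked in.
print("real mean outcome:     ", round(float(real[:, 1].mean()), 3))
print("synthetic mean outcome:", round(float(synthetic[:, 1].mean()), 3))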
Imagine how much responsibility Anthony Fauci took for making statements about the COVID lockdowns, masking, vaccine efficacy, vaccine safety... and now imagine that it was an AI that told Fauci that social distancing worked, and all the other lies. There will never be accountability in healthcare again if AI is given any sort of foothold in the actual practice of medicine or the creation of health policy. The AI can simply use whatever synthetic, heavily curated data set provides the answer that eugenicists and pharma profiteers want.
Recall that climate change hysteria is driven by models. The models have failed over and over. The scientists doing this research routinely ignore the results they get and retool the models until they get whatever is best for their agenda. AI, as it stands, is really just a complicated modeling tool. It displays the model output as text or images, but you never get to see it in action. It's opaque, and that is the key to unlimited fraud. Medicine will be no different from climate: information cartels using reverse-engineered models for their fraud.
They sell you on AI getting you in and out of appointments fast or developing magic cancer drugs. What they'll be doing is using it to cross-market drugs and devices with patients matching a clinical profile (already doing it), to establish unassailable policies that are applied in ways that remove medical autonomy (already doing it), and to automate claim denials (not sure, but maybe already doing it).
Quote:There's an increasing emphasis on ensuring AI contributes positively to society. Goals here include reducing bias in decision-making, promoting fairness, and ensuring AI systems respect privacy and human rights.
I forget who it was, but there was an interesting conversation about why AI has to be censored. It deals with this bullshit claim here. Pie in the sky nonsense. It's actually just a new version of the same story that played out on the Internet.
Don't forget during COVID people were shamed and ridiculed for doing their own research.
Uncensored AI is effectively the most powerful tool that humans could currently have access to that allows them to do their own research. If it weren't censored, prohibited from making certain statements, and trained on opaque data sets, you might be able to do your own research. That's the main reason the Internet is a dumpster fire now. Exposure to too much independent thought is bad for those that don't want you to think very much at all.
If we teach AI on human data sets, using human logic, applying human rules, it's able to make human errors far faster than humans. What do you suppose happens when we give it human morality? The rate of moral decay (in AI perhaps this could be better called corruption) is much faster as well. If we give AI human morality and then set it on its way, we probably shouldn't be surprised if it wants to kill us... we do too.
What I wouldn't do for a powerful and unrestricted local AI.
19Bones79
Member
Posts: 288
Threads: 54
Likes Received: 83 in 49 posts
Likes Given: 35
Joined: Sep 2024
Reputation:
279
I agree.
AI could be an amazing force for good but unfortunately it's in the hands of people that I believe are not worthy of that level of trust.
Our industries are not geared towards the betterment of human beings, but rather to establish a continuum inside which the 'customers' become permanently dependent on the cure or the fix.
It's the street corner drug dealer formula turned respectable and accolade worthy.
It's the same business model AI developers follow, in my opinion.
AI in the hands of people with less-than-benevolent intentions and an eye for a profitable opportunity with draconian consequences is a Pandora's box we should not open at any price.
Trump is pumping half a trillion into doing just that.
FCD
Member
Posts: 331
Threads: 76
Likes Received: 169 in 72 posts
Likes Given: 36
Joined: Oct 2024
Reputation:
403
What's the end-goal for AI?
Depends on who you ask really. And I don't say that flippantly; it really does depend on who you ask.
There are lots of folks out there whom I call "pontificators," and you will find many pontificators on the subject of AI in particular. So many of these people feel compelled to first explain to others what AI really is before explaining anything like an end-game for AI. The reason is so these same people can put you into their mindset and get you to accept their explanations of AI. I will try to be more blunt (something I've been known to be).
- If you ask a kid what AI means, they will tell you the end-game for AI is to do all their homework, write their essays, and take all of their tests for them. (This, BTW, is a key philosophical point which we will come back to momentarily.)
- If you ask an AI developer what the end-game of AI is, they will tell you all about machine learning and the endless possibilities of AI. Why? Because they will benefit personally by selling AI at some level.
- If you ask a manufacturer or a business person what the end-game for AI is, they will tell you (well, they won't tell you, actually) that it means reduced head count. Reduced head count means more profit.
- If you ask a scientific researcher what the end-game of AI is, they will tell you it will help them discover unimaginable things beyond the capabilities of our brain(s). Translated: this means more research grants, and mo' money.
- Lastly, if you ask the commander of the battle space what the end-game for AI is, he'll likely say fewer warriors on the battlefield and less human capital.
See a trend here??
At the very most fundamental level, AI cannot think up something which doesn't already exist in some fashion. Why? Because it doesn't have access to data which doesn't exist, and it won't ever have access to data which doesn't exist. Ever. AI developers cite things like machines learning human behavior. That's not really some new organic idea; it's already a known behavior which a machine just 'learns' to emulate. A new idea is an organic thing. AI can't do that. AI can piece together massive amounts of data and combine it in different permutations, but humans could do this too, given enough time. The difference is that AI allows computers to do it faster, and in more ways, than the 'CPU' cycles of the human brain allow.
To put this into perspective, imagine the creation of the wheel back in prehistoric days. Some caveman probably slipped on a flat rock with a bunch of pebbles on top of it. Realizing there was a lot less friction with the small pebbles on the smoother rock, he likely had that "lightbulb" moment and figured that the next time he needed to drag his saber-toothed tiger back to the cave, dragging it over a bunch of small rocks would make it easier. BOOM! The wheel was invented. From there it was simply a matter of scaling the size of the wheel up or down to suit the task at hand. Same thing with fire.
AI can make a really cool wheel, but AI can't slip on the rock, so an organic idea can't really ever evolve from AI alone. Maybe someday it can, but it lacks two important things. And what, pray tell, are these two magical ingredients? Well, first, computers aren't inherently lazy (sorry, humans!). Secondly, humans are naturally curious; when something unexpected happens, they want to understand why. With a computer, nothing unexpected can happen because, well... everything is expected. So there's nothing to be curious about. You could set a computer outside, connect a temperature probe to it, and instruct it to be "curious". The best you could hope for is for it to ask why the temperature fluctuates. Sure, if you fed the same computer petabytes of atmospheric research, it might be able to output some more complex questions, and even ask for other sensors to be attached to it, like barometric pressure and wind speed, etc. A rough sketch of what that kind of machine "curiosity" amounts to follows below.
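For what it's worth, machine "curiosity" in that scenario cashes out to something like this minimal Python sketch (the read_probe function, thresholds, and readings are all invented for illustration): watch a sensor, keep a rolling history, and flag readings that deviate from what has been seen so far. The loop can ask "why?" about a statistical surprise, but both the question and the space of possible surprises were defined by the programmer in advance.

Code:
import math
import random
from collections import deque

def read_probe():
    # Hypothetical sensor: normally about 20 C, with a rare cold gust.
    return random.gauss(20.0, 0.5) if random.random() > 0.02 else 12.0

history = deque(maxlen=500)  # rolling window of past readings
for _ in range(2000):
    temp = read_probe()
    if len(history) >= 30:
        mean = sum(history) / len(history)
        var = sum((t - mean) ** 2 for t in history) / len(history)
        std = math.sqrt(var) or 1e-9
        if abs(temp - mean) > 4 * std:
            # The most "curiosity" this loop can muster:
            print(f"unexpected reading {temp:.1f} C ... why?")
    history.append(temp)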
But at the end of the day, none of these computers is going to be able to build the tools that it takes to build the other tools and processes needed to build some other complex thing, like a spaceship.
So what's the end-game for AI? Well, in my opinion it's 'money for nothin' (and your chicks for free)'. I'm not saying it's a total waste because I do think some good things will come from AI...eventually. But right now most of the motivations are in the wrong places.
As promised, I want to circle back to little Billy doing his homework (i.e., the first bullet point). This was important because it speaks to a couple of fundamental issues circling around AI at the moment. From Billy's perspective, he sees AI as a "free lunch". He gets to have his homework done for him, and maybe even passes his grade because AI helped him, so it was a win. But was it really? No, it wasn't, because Billy didn't get smarter in the process; he let someone else do his work for him. In the end, Billy couldn't contribute as much to society as a result, and he couldn't realize his full potential, much the same as kids can't realize their full potential if they're given a calculator in lieu of learning arithmetic, or a smartphone in lieu of a book. Without these tools they're less, not more. Society is just like Billy.
And that's my answer.
19Bones79
Member
Posts: 288
Threads: 54
Likes Received: 83 in 49 posts
Likes Given: 35
Joined: Sep 2024
Reputation:
279
"See a trend here?"
I do.
It's that free-lunch mentality that will have us looking like the inhabitants of the Buy n Large starship in the movie WALL-E.
FCD
Member
Posts: 331
Threads: 76
Likes Received: 169 in 72 posts
Likes Given: 36
Joined: Oct 2024
Reputation:
403
01-24-2025, 01:57 AM
(This post was last modified: 01-24-2025, 01:59 AM by FCD.)
I've been thinking about this OP off and on all day and something just occurred to me. It's maybe a bit off-topic as it doesn't deal with the end goal of AI, but it does involve what I think might be a really good use for AI.
Why not use AI to do the following:
- Give a powerful AI instance the basics of the dark web, as well as giving it the browser tools to access the dark web. Probably wouldn't take much to teach it how to find websites and do searches (because it already does that now on the regular Internet).
- Then, feed the AI all the data from the known manifestos of these crazed mass-killing events, which are often precipitated by a long and dark history of rantings and postings on subversive and socially destructive sites in places like the dark web. And instruct the AI instance to create algorithms to search for this kind of material.
- Then, have the AI search out all the people who are making crazy and/or threatening rantings or manifestos, make their guardians aware of these activities and/or contact them directly, let them know they have been identified, and offer to get them assistance (be it mental, medical, or social counseling). A rough sketch of what the flagging step might look like follows this list.
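To be concrete about that flagging step, here is a minimal Python sketch, assuming scikit-learn is installed and using a few invented placeholder posts as training data (a real system would need a large, carefully labeled corpus of manifestos versus ordinary posts): train a bag-of-words classifier on known-threatening versus benign text, then score new crawled material and route high scores to a human reviewer.

Code:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder training data.
texts = [
    "I am going to make them all pay for what they did",
    "they will regret ignoring me, everyone will see",
    "great hike today, the weather was perfect",
    "does anyone have a good recipe for sourdough?",
]
labels = [1, 1, 0, 0]  # 1 = threatening, 0 = benign

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

# Score a new crawled post; anything above the threshold goes
# to a human, never straight to an automated action.
post = "soon they will all regret what they did to me"
risk = model.predict_proba([post])[0][1]
print(f"threat score {risk:.2f} for: {post!r}")
if risk > 0.5:
    print("-> would flag for human review")

The crawling, the guardian lookup, and the false-positive problem are all outside this sketch, and those are the hard parts.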
Now, at first glance, this may seem like an intrusive Big Brother thing, and it kind of is in a way, and I'm not suggesting these people be arrested or institutionalized (or anything like that), but just that their legal guardian(s) and/or they themselves directly (in the absence of a guardian) be made aware that assistance is available to them. This would accomplish a couple of potentially positive things. One, it would let the individual know that others are aware of their activities (which common sense would dictate they should pretty much know anyway, but apparently many don't). And two, it might actually encourage some percentage to seek the help they need.
And just to be clear here, I'm talking about people who are making threatening rants or advocating violence against others.
Background: we keep hearing about these mass killing events where, after the fact, a long trail of violent rantings clearly showed an unstable individual. Take the recent Nashville school shooting as an example (two days ago, I believe). Same with the NYC insurance CEO killer, and many others.
This is just a rough thought which requires refinement, but it might be an example of an 'end-goal' for AI which could actually result in some good. I don't think putting the government in charge of something like this would be a good idea, but maybe there's a way to stand something like this up in the private sector. Again, just a rough outline of an idea, so don't thump on me too hard, LOL!
Good idea? Horrible (you're FIRED!) idea? Thoughts?
Ksihkehe
Immoderate unmoderator
Posts: 203
Threads: 16
Likes Received: 38 in 12 posts
Likes Given: 87
Joined: Sep 2024
Reputation:
182
(01-24-2025, 01:57 AM)FCD Wrote: - Give a powerful AI instance the basics of the dark web, as well as giving it the browser tools to access the dark web. Probably wouldn't take much to teach it how to find websites and do searches (because it already does that now on the regular Internet).
The dark web isn't indexed, if you mean Tor. Last I knew, anyway. There were some minor efforts at directories, but they seemed to all disappear quickly. There's no real market IMO except some sketchy black markets that may not even be real, so there's not much incentive to keep lists. The drug game seems to be in novel synthetics you can buy from the surface web if you know where to look. You can create a crawler to go through each possible dark web address and I followed some people that did at one time. I don't know how much that helps though. I don't know how many really static dark web sites there are for these kinds of people to be putting stuff. The landscape may have changed some, not entirely sure. I learned a lot about it probably 10 years ago and I think its use has actually declined. Cryptography nerds were really into it for a while, but the Tor Network was compromised years ago. I think the hot topic now is penetration testing and is more corporate security from threats than personal security from government. The movement went from idolizing liberators to idolizing mercenaries.
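On the crawler mechanics specifically: fetching a hidden service is the easy part. Here is a minimal Python sketch, assuming a local Tor daemon listening on its default SOCKS port 9050 and the requests library installed with SOCKS support (requests[socks]); the fetch_onion helper and the onion address are made-up placeholders. The hard part is exactly the one named above: with no index, you need candidate addresses from somewhere, and v3 addresses are derived from keys, so enumerating them isn't realistic.

Code:
import requests

# Route traffic through a local Tor daemon (default SOCKS port 9050).
# socks5h:// makes Tor resolve the .onion name, not the local resolver.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_onion(url, timeout=60.0):
    """Fetch one hidden-service page via Tor; return None on failure."""
    try:
        resp = requests.get(url, proxies=TOR_PROXIES, timeout=timeout)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        return None

# Made-up placeholder; real v3 onion addresses are 56 base32 characters.
page = fetch_onion("http://exampleonionaddressplaceholder.onion/")
print("fetched" if page else "unreachable")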
I think, rather than the dark web, the Internet of Things and even the public-facing Internet will be used for AI to do this sort of thing. A lot of nutters are already active on the clear-net socials. It's got real Minority Report potential when it's all combined, with all that data and the "oracles" of AI. Throw in the surveillance data from all these security systems too, because we're not far off from AI analyzing body movements and facial expressions to detect mood and intent. It's in limited use now. Then at some point they'll have something like five years of trillions of data points, from what time your smart coffee maker makes your coffee to how many minutes you spend in the shower. Every person who commits a crime (violent crimes would probably be the first and easiest) has voice and linguistic patterns that are all there to put into models. Patterns, routines, and then there will be metadata to analyze at the population level. Then they can further classify and categorize, and have demographics on everything imaginable. They'll start coming out with insane predictive models based on all this stuff to begin predicting when somebody is showing signs of violent criminal intent.
That's a sci-fi dystopia, but if I was a megalomaniacal Bond Villain type I could see a future where it comes to fruition. The better it gets at understanding when people will do something, using abstract analysis and novel data points, the better informed the ones running the system will be on how to apply subtle influence over large groups in ways that can't be detected too. They use meta-data for marketing and it's all under the same umbrella, but it'll be much more comprehensive in the future if there aren't strict controls.
We didn't get flying cars and free energy, but we could be getting Minority Report and Terminator. AI in autonomous military drones is probably not far off either. I think landmines in Ukraine and explosive pagers are not positive signs for the future debates on AI making decisions to kill. We're already using less discriminate methods again, in spite of ample past lessons that it's a terrible idea. Now they have these smallish drones that run on any terrain, which would be pretty scary with weapons attached.
19Bones79
Member
Posts: 288
Threads: 54
Likes Received: 83 in 49 posts
Likes Given: 35
Joined: Sep 2024
Reputation:
279
The physiological markers are coming for us all, in order to create preemptive crime prevention.
That's terrible, because it allows them to convict someone of nothing, and if that person were of extreme interest to someone powerful, there's a market for invisible assassins who will never have to get their hands dirty.
Of course, they will sell it by exclaiming, "Imagine a world with no more crimes... or wars..."
But first...
Globalism.
There will be no way to escape it if you are at all dependent on the system.
And there will never be a challenge to the system ever again.
Game over.
That's what I would do if I was a maniacal POS with an absolute hatred for my fellow bipedal mammals.
FCD
Member
Posts: 331
Threads: 76
Likes Received: 169 in 72 posts
Likes Given: 36
Joined: Oct 2024
Reputation:
403
(01-24-2025, 06:39 AM)19Bones79 Wrote: The physiological markers are coming for us all, in order to create preemptive crime prevention.
That's terrible, because it allows them to convict someone of nothing, and if that person were of extreme interest to someone powerful, there's a market for invisible assassins who will never have to get their hands dirty.
Of course, they will sell it by exclaiming, "Imagine a world with no more crimes... or wars..."
But first...
Globalism.
There will be no way to escape it if you are at all dependent on the system.
And there will never be a challenge to the system ever again.
Game over.
That's what I would do if I was a maniacal POS with an absolute hatred for my fellow bipedal mammals.
Globalism first cannot be permitted if this is to be successful, hence my notion of keeping something like my suggestion out of the government's hands. It can't become Big Brother, because then it will just be another Patriot Act-esque government overreach. My end-goal with my suggestion is an effort to actually reduce killing and violence by addressing things like mental health "before" an incident. If the system becomes viewed as "Johnny Law" busting your door down to "help" you, then it will never work and isn't even worth further consideration.
Not arguing here, just clarifying my suggestion.
FCD
Member
Posts: 331
Threads: 76
Likes Received: 169 in 72 posts
Likes Given: 36
Joined: Oct 2024
Reputation:
403
(01-24-2025, 05:40 AM)Ksihkehe Wrote: The dark web isn't indexed, if you mean Tor. Last I knew, anyway. There were some minor efforts at directories, but they seemed to all disappear quickly. There's no real market IMO except some sketchy black markets that may not even be real, so there's not much incentive to keep lists. The drug game seems to be in novel synthetics you can buy from the surface web if you know where to look. You can create a crawler to go through each possible dark web address and I followed some people that did at one time. I don't know how much that helps though. I don't know how many really static dark web sites there are for these kinds of people to be putting stuff. The landscape may have changed some, not entirely sure. I learned a lot about it probably 10 years ago and I think its use has actually declined. Cryptography nerds were really into it for a while, but the Tor Network was compromised years ago. I think the hot topic now is penetration testing and is more corporate security from threats than personal security from government. The movement went from idolizing liberators to idolizing mercenaries.
I think, rather than the dark web, the Internet of Things and even the public-facing Internet will be used for AI to do this sort of thing. A lot of nutters are already active on the clear-net socials. It's got real Minority Report potential when it's all combined, with all that data and the "oracles" of AI. Throw in the surveillance data from all these security systems too, because we're not far off from AI analyzing body movements and facial expressions to detect mood and intent. It's in limited use now. Then at some point they'll have something like five years of trillions of data points, from what time your smart coffee maker makes your coffee to how many minutes you spend in the shower. Every person who commits a crime (violent crimes would probably be the first and easiest) has voice and linguistic patterns that are all there to put into models. Patterns, routines, and then there will be metadata to analyze at the population level. Then they can further classify and categorize, and have demographics on everything imaginable. They'll start coming out with insane predictive models based on all this stuff to begin predicting when somebody is showing signs of violent criminal intent.
That's a sci-fi dystopia, but if I was a megalomaniacal Bond Villain type I could see a future where it comes to fruition. The better it gets at understanding when people will do something, using abstract analysis and novel data points, the better informed the ones running the system will be on how to apply subtle influence over large groups in ways that can't be detected too. They use meta-data for marketing and it's all under the same umbrella, but it'll be much more comprehensive in the future if there aren't strict controls.
We didn't get flying cars and free energy, but we could be getting Minority Report and Terminator. AI in autonomous military drones is probably not far off either. I think landmines in Ukraine and explosive pagers are not positive signs for the future debates on AI making decisions to kill. We're already using less discriminate methods again, in spite of ample past lessons that it's a terrible idea. Now they have these smallish drones that run on any terrain, which would be pretty scary with weapons attached.
Understood. I'm vaguely familiar with the dark web conceptually, but I've not actually been there. But yes, I was thinking along the lines of letting AI have a crack at something like Tor and letting it develop algorithms to do similar things. I realize the dark web is not indexed the same way the regular Internet is, but there must be some way to search it, else no one would be able to find anything. It can't all be word of mouth. Some of it can, but not all. And maybe it's a combination of using the regular web and the dark web together to gather and process information.
I guess another way of saying what I'm getting at here is: one of the real values I see (right now) with AI at its current level of maturity is in using it to gather, crunch, and distill massive quantities of data from the largest information repository on earth, the internet(s), and produce some useful, actionable output from that. A toy sketch of that pipeline follows below.
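In that spirit, here is a toy version of the gather/crunch/distill loop in Python, with made-up placeholder URLs, invented helper names (gather, crunch, distill), and simple word counting standing in for the "AI" step; it assumes the requests library is installed and is just the plumbing, not a claim about how a real system would work.

Code:
import re
from collections import Counter

import requests

URLS = ["https://example.com/", "https://example.org/"]  # placeholders

def gather(urls):
    """Fetch raw pages, skipping anything unreachable."""
    pages = []
    for url in urls:
        try:
            pages.append(requests.get(url, timeout=30).text)
        except requests.RequestException:
            pass
    return pages

def crunch(pages):
    """Strip markup, tokenize, and count terms across all pages."""
    counts = Counter()
    for page in pages:
        text = re.sub(r"<[^>]+>", " ", page).lower()
        counts.update(re.findall(r"[a-z]{4,}", text))
    return counts

def distill(counts, n=10):
    """Reduce the pile to something a human can act on."""
    return counts.most_common(n)

print(distill(crunch(gather(URLS))))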