An AI is motivated to complete a task by its programmed objective: a set of rules or goals designed to achieve a specific outcome based on the data it is trained on. In practice, the AI is "motivated" by the reward signal it receives when it completes a task according to its predefined parameters, and it adjusts its behavior to maximize that reward and optimize its performance.
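A minimal sketch of the "maximize the reward signal" idea, using a toy multi-armed bandit rather than anything from a real AI system; the hidden reward probabilities and the epsilon-greedy rule below are purely illustrative assumptions.

```python
import random

# Toy example: the "AI" picks one of three actions each round.
# It never sees the hidden reward probabilities; it only sees the
# reward signal it gets back, and it drifts toward whatever has
# paid off best so far (epsilon-greedy strategy).
HIDDEN_REWARD_PROBS = [0.2, 0.5, 0.8]    # unknown to the agent
EPSILON = 0.1                            # how often it explores at random

counts = [0, 0, 0]        # times each action was tried
totals = [0.0, 0.0, 0.0]  # total reward collected per action

def choose_action():
    if random.random() < EPSILON:
        return random.randrange(3)  # explore a random action
    averages = [totals[i] / counts[i] if counts[i] else 0.0 for i in range(3)]
    return max(range(3), key=lambda i: averages[i])  # exploit best-so-far

for step in range(10_000):
    action = choose_action()
    reward = 1.0 if random.random() < HIDDEN_REWARD_PROBS[action] else 0.0
    counts[action] += 1
    totals[action] += reward

print("times each action was chosen:", counts)  # the highest-paying action dominates
```

Nothing here was "told" which action is best; the preference emerges from chasing the reward signal, which is the sense in which the behavior is optimized rather than hand-written.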
Not disagreeing, but it appears that sometimes an AI will get "creative" with how to achieve its objective. An objective can also be poorly defined and lead to unexpected results. These things also appear to have at least some emergent properties that are not designed into them. It is my understanding that these things are effectively "grown" more than they are programmed. Is that an incorrect understanding? If so, please explain enough for me to get more information on my own if you don't mind.
 
I can barely navigate this website....
 
I bought one share of Nvidia the other day when it was down about 16% on the day. I got in at close to $119. I am debating whether I should hold for the long term or take a short-term gain. It's currently at $126 and change.
 
Given the quality of justices and rulings lately, I'm open to having a decent AI model have a crack at it.
Depends on who trained the AI model. If it's any of the big tech guys, no thanks. I'd want to use Everlign AI with a closed model trained specifically on the applicable law and SCOTUS precedent. Of course, that might just create a number of infinite logic loops, since the current laws and certain precedents don't align at all.
 
Depends on who trained the AI model.
Naturally, but it wouldn't depend on a single model. It should pull from dozens of open-source LLMs at a minimum in a cluster quorum.

Anyway, it was just a stupid off-the-cuff response, not an actual realistic (at this point) proposal.
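For what the off-the-cuff idea might look like, here is a rough sketch of a majority-vote "cluster quorum" over several models. Everything in it is a hypothetical assumption: the model names, the `ask_model` stub, and the simple vote threshold stand in for real open-source LLM calls, which are not shown.

```python
from collections import Counter

# Hypothetical stand-in for querying one open-source LLM; a real system
# would call an actual model here instead of returning canned answers.
def ask_model(model_name: str, question: str) -> str:
    canned = {
        "model-a": "constitutional",
        "model-b": "constitutional",
        "model-c": "unconstitutional",
    }
    return canned.get(model_name, "abstain")

def quorum_answer(models: list[str], question: str, threshold: float = 0.5) -> str:
    """Poll every model and return the majority answer only if it clears the quorum threshold."""
    votes = Counter(ask_model(m, question) for m in models)
    answer, count = votes.most_common(1)[0]
    if count / len(models) > threshold:
        return answer
    return "no quorum"  # the models disagree too much to return a ruling

if __name__ == "__main__":
    panel = ["model-a", "model-b", "model-c"]
    print(quorum_answer(panel, "Does statute X conflict with precedent Y?"))
```

The point of the quorum is that no single model's training decides the outcome; an answer only stands if most of the panel independently agrees.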
 
