Yeah, I'm sorry to say this but virtually every company is a vector for AI indoctrination. I do not believe the AI market bubble will pop any time soon, not until people make (and then fail to control) AGI.
I don't really understand this sentiment. It absolutely is possible. My concern is this willingness to invest in AI blindly (like, why exactly is Cygames investing in their own AI firm?), and AI firms themselves showing a concerning tendency to treat (or a desire to treat) their training pool as equivalent to livestock. It's disgusting in my opinion, and it's one of the many reasons I can't trust the people behind AI models.
Same people that are perfectly fine with the normalization of gambling by companies to make more money, malding out over companies exploring new technologies to make more money. At least AI tech actually has practical applications when used well, as opposed to just being pure degeneracy. They're not even saying they're gonna apply that shit to their current games. People have genuinely become such toddlers on this subject.
Gonna completely ignore the damage AI is doing to the economy, especially the tech industry, and how it's making everything substantially more expensive. The damage it's doing to education, critical thinking, the reliability of information, privacy, art, culture, etc. If only it were just exploring new technologies to make money. Some people are genuinely blind on this subject.
At the very least, Umamusume can make millions of people happier regardless of whether or not they're engaging with the gacha. F2P players exist. People who engage with the content but not the game exist. The money people drop into the gacha can also be redistributed into new anime, movies, and other content.
One's a poor financial choice that someone could reasonably avoid with free will, and one's a technology that's becoming an increasingly unavoidable part of global policing and surveillance infrastructure, but okay.
Main issue here is what you stated in the middle part. There are non-generative AI types, called discriminative/predictive AI, that genuinely do a lot of good in the medical field when professional doctors use them as a diagnostic tool rather than a shortcut. They also don't come with the same environmental and ethical baggage generative AI does when implemented properly. But GenAI companies point to those models and go "See, look! We're AI too! We're just like them!" and then go and make the PRISMv2 or Pedotron 9000. If you are genuinely supportive of AI, you should be severely offended at the whitewashing of unethical implementations by hiding behind ethical ones, tarnishing the reputation of both, rather than at the people rejecting AI in its entirety because CEOs and investors have intentionally muddied the waters.
Good example of predictive AI? WatsonX, by IBM, which you can see in action during the US Open as it summarizes matchups between players based on parameters and data points it has collected over several years of operation. However, if you use the chatbot module in the US Open app and ask it questions as if it were ChatGPT, you'll find its answers to be less than satisfactory.
Why are people downvoting you? Do they think that your message is pro-AI?