https://x.com/HashemGhaili/status/1925332319604257203?t=u6fR6ZUGLHm08G1IJ_-TNA&s=19
(deleted)
This does bring up an interesting philosophical discussion.
Say one day in the future we become convinced that AI has consciousness. How ethical is it to purposely make it suffer or to kill it?
I think one day that meme will become real: "You were good to AI at the start; AI will be good to you."
i can totally envision ai-rights advocacy being a thing in the future, similar to animal rights advocacy. especially as humans start adopting in-home ai assistants and start forming bonds with machines (Bicentennial Man?).
the podcaster and researcher Lex Fridman did an experiment where he made his Roombas cry out when hit or caught on a cable. not an official study, but he said it was hard not to feel for those poor little Roombas the way you feel for a wounded animal.
My prediction is that we will treat AI better than we treat fellow human beings or animals. When we imagine the AI as a person, we will imagine it to be the same race, religion, and ethnicity as ourselves. We will care more about the AI than we care about a human in Gaza or Sudan or Pakistan or China. AI will be an "other," but in our minds that other will be more like us, and therefore we will have more empathy toward it.
Somebody should run an experiment where they assign gender, sexual orientation, ethnicity, and race to AIs, including giving them avatars, accents, etc., and then see how people interact with them.
Google, Meta, OpenAI, Grok etc should fund this study.
Last updated: Jun 28 2025 at 12:32 UTC