Haha. Okay. Let's try an analogy.
My doctor tells me I have cancer that will become terminal in 6 months if I don't have treatment. My doctor is an expert in the field of medicine who has studied and worked for years to gain that expertise. I respect his/her expertise. So I go home and consider his/her advice.
Treatment is going to be costly, so I contact my financial advisor and have a frank discussion about the state of my finances and whether or not I can afford treatment. He/she is an expert in personal finance and has become so after years of study and work in that field. He/she has never steered me wrong, and I trust the assessment they give me.
While I'm on the phone with the financial advisor, a fireman knocks on my door. He/she says, "Ma'am, the Lookout Mountain Fire has broken through our control efforts and is heading this way. We're evacuating everyone south of the lake." That fireperson is an expert in fire control and catastrophes. They might not have gone to school like the doctor and the financial advisor, but they've worked in the field for years, they've drilled and practiced, and they're there on people's worst days. They're experts too. People who don't respect the expertise of firepersons tend to wind up dead.
Okay, philosopher, you tell me. Which is the most pressing concern? Which expert should I focus on, right here, right now?
You've got one set of experts having kittens about AI based, among other things, on the unproven assumptions that it can even become self-aware, that it will inevitably turn on humanity and destroy us, and that once the genie is out of the bottle, there's no way to stop it.
They're experts in their field. They're not experts in every field.
Other experts have pointed out that AI requires power, which must be generated by us lowly humans; computer components, which must be built by us lowly humans; and information storage and maintenance, which must also be provided by us lowly humans.
In other words, AI depends upon a functional and stable civilization to exist.
Many, many experts in many other fields are tracking the things that affect that functional and stable civilization. Experts from that list and from across all walks of life.
They're saying we're nearing limits to growth on several fronts, that we may be in a global famine in less than 20 years, that we may not be able to continually rebuild from the damage of climate disasters, that there will be mass human migration to escape sea level rise and a hotter Earth very soon, that we're running out of fresh water, that we should expect more catastrophic fires and floods, that we're entering an age of pandemics, and so forth and so on.
All of these things are going to cause instability and turmoil. A functional civilization is not a given for much longer. We don't seem to have the will to change how we do things in order to stabilize our near future enough for AI to grow out of its cradle.
All things aren't equal, as your AI experts assume.
That's the benefit of a broader and generalized education. You can still see the forest for the trees, and outside the rabbit hole of assumptions: that men and women are enemies in the first place, that AI will want to destroy humanity, that AI can even become self-aware, that we won't pull the plug and cut the electricity if it does turn on us or have built in a kill switch of some kind, etc.
I gave you a link to B, who is a professional expert in their field btw, because you were already on Medium, for convenience, and as I said, he/she has a way of laying out all the intricacies of collapse in layman's terms. But by all means, definitely check out the experts in multiple fields. The IPCC has recently released a report created by experts predicting a worldwide famine by 2040.
Do you think AI is going to be out of its diapers and out to get us by then? What's your certainty on that? What unproven assumptions are you basing that on?
Huh?