As we move deeper into the 21st century, we're starting to catch a glimpse of the fantastic technological possibilities that await. But we're also beginning to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
As I was putting this list together, it became obvious to me that many of the technologies described below could be put to tremendously good use. It was important, therefore, for me to make the distinction between a technology per se and how it might be put to ill use. Take nanotechnology, for example. Once developed, it could be used to end scarcity, clean up the environment, and rework human biology. But it could also be used to destroy the planet in fairly short order. So, when it comes time to develop these futuristic technologies, we'll have to do it safely and responsibly. But just as importantly, we'll also have to recognize when a particular line of technological inquiry is simply not worth the benefits. Artificial superintelligence may be a potent example.
That said, some technologies are objectively bad. Here's what Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, told io9 about this:

The idea that technology is neutral or amoral is a myth that needs to be dispelled. The designer can imbue ethics into the creation, even if the artifact has no moral agency itself. This feature may be too subtle to notice in most cases, but some technologies were born from evil and don't have redeeming uses, for instance, gas chambers and any of the devices discussed here. And even without that point (whether technology can be intrinsically good or bad), everyone agrees that most technologies can have both good and bad uses. If there's a greater likelihood of bad uses than good ones, then that may be a reason not to develop the technology.
With all that out of the way, here are 10 bone-chilling technologies that should never be allowed to exist (listed in no particular order):
1. Weaponized Nanotechnology
Nothing could end our reign here on Earth faster than weaponized — or severely botched — molecular-assembling nanotechnology.
It's a threat that stems from two extremely powerful forces: unchecked self-replication and exponential growth. A sufficiently nihilistic government, non-state actor, or individual could devise microscopic machines that consume our planet's critical resources at a rapid-fire rate while replicating themselves in the process and leaving useless by-products in their wake — a residue futurists like to call "grey goo." (Image: scene from The Animatrix)
https://gizmodo.com/could-a-single-individual-really-destroy-the-world-1471212186

Nanotechnology theorist Robert Freitas has brainstormed several potential variations of planet-killing nanotech, including aerovores (a.k.a. grey dust), grey plankton, grey lichens, and so-called biomass killers. Aerovores would blot out all sunlight, grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens would destroy land-based geology, and biomass killers would attack various organisms.
According to Freitas, a worst-case scenario of "global ecophagy" would take about 20 months, "which is plenty of advance warning to mount an effective defense." By defense, Freitas is referring to countermeasures likely involving self-replicating nanotechnology, or some kind of system that disrupts the internal mechanisms of the nanobots. Alternately, we could set up "active shields" in advance, though most nanotechnology experts agree they'll be useless. Consequently, a moratorium on weaponized nanotechnology should be established and enforced.
2. Conscious Machines
It's generally taken for granted that we'll eventually imbue a machine with artificial consciousness. But we need to think very carefully about this before we go ahead and do such a thing. It may actually be very cruel to build a functional brain inside a computer — and that goes for both animal and human emulations. (Image: Bruce Rolff/Shutterstock)
https://gizmodo.com/would-it-be-evil-to-build-a-functional-brain-inside-a-c-598064996
Back in 2003, philosopher Thomas Metzinger argued that it would be horrendously unethical to develop software that can suffer:

What would you say if someone came along and said, "Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!" You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today's ethics committees don't see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.
Futurist Louie Helm agrees. Here's what he told me:
One of the best things about computers is that you can make them sum a million columns in a spreadsheet without them getting resentful or bored. Since we plan to use artificial intelligence in place of human intellectual labor, I think it would be immoral to deliberately program it to be conscious. Trapping a conscious being inside a machine and forcing it to do work for you is isomorphic to slavery. Additionally, consciousness is probably really fragile. In humans, a few miscoded genes can cause Down Syndrome, schizophrenia, autism, or epilepsy. So how horrible would it feel to be a slightly misprogrammed form of consciousness? For instance, several well-funded AI developers want to recreate human intelligence in machines by simulating the biological structure of human brains. I sort of hope and expect that these near-term attempts at cortical simulations will be too coarse to actually work. But to the extent that they do work, the first "successes" will likely produce cripplingly unpleasant or otherwise deranged states of subjective experience. So as a programmer, I'm generally against self-aware artificial intelligence. Not because it wouldn't be cool. But because I'm just morally opposed to slavery, torture, and unnecessary code.

3. Artificial Superintelligence
As Stephen Hawking declared earlier this year, artificial intelligence could be our worst mistake in history. Indeed, as we've noted many times before here on io9, the advent of greater-than-human intelligence could prove catastrophic. The introduction of systems far faster and smarter than us would force us to take a back seat. We'd be at the mercy of whatever the artificial superintelligence decides to do — and it's not immediately clear that we'll be able to design a friendly AI to prevent this. We need to solve this problem; otherwise, building an ASI would be absolutely nuts. (Image: agsandrew/Shutterstock)
https://gizmodo.com/stephen-hawking-says-a-i-could-be-our-worst-mistake-in-1570963874
https://gizmodo.com/how-artificial-superintelligence-will-give-birth-to-its-1609547174

https://gizmodo.com/how-much-longer-before-our-first-ai-catastrophe-464043243
https://gizmodo.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007
4. Time Travel
I'm actually not much of a believer in time travel (i.e. where are all the time travelers?), but I will say this — if it's possible, we'll want to stay the hell away from it.
It would be insanely dangerous. Any scifi movie dealing with polluted timelines should give you an idea of the potential perils, especially those nasty paradoxes. And even if some form of quantum time travel is possible — in which completely new and discrete timelines are created — the cultural and technological exchange between disparate civilizations couldn't possibly end well.
5. Mind Reading Devices
The prospect exists for machines that can read people's thoughts and memories at a distance and without their consent. This likely won't be possible until human brains are more intimately integrated with the web and other communication channels.
Last year, for example, scientists from the Netherlands used brain scan data and computer algorithms to determine which letters a person was looking at. The breakthrough hinted at the potential for a third party to reconstruct human thoughts at an unprecedented level of detail, including what we see, think, and remember. Such devices, if used en masse by some sort of totalitarian regime or police state, would make life intolerable. It would introduce an Orwellian world in which our "thought crimes" could actually be enforced. (Image: Radboud University Nijmegen)
6. Brain Hacking Devices
Relatedly, there's also the potential for our minds to be altered against our knowledge or consent. Once we have chips in our brains, and assuming we won't be able to develop effective cognitive firewalls, our minds will be exposed to the Internet and all its evils.
Incredibly, we've already taken the first steps toward this end. Recently, an international team of neuroscientists set up an experiment that allowed participants to engage in brain-to-brain communication over the Internet. Sure, it's exciting, but this tech-enabled telepathy could open a Pandora's box of problems. Perhaps the best — and scariest — treatment of this possibility was portrayed in Ghost in the Shell, in which an artificially intelligent hacker was capable of modifying the memories and intentions of its victims. Now imagine such a thing in the hands of organized crime and paranoid governments.
https://gizmodo.com/technologically-assisted-telepathy-demonstrated-in-huma-1630047523

7. Autonomous Robots Designed to Kill Humans
The possibility of autonomous killing machines is a scary one — and perhaps the one item on this list that's already an issue today.
https://gizmodo.com/the-case-against-autonomous-killing-machines-5920084
Here's what futurist Michael LaTorra told me:

We do not yet have machines that display general intelligence even close to the human level. But human-level intelligence is not required for the operation of autonomous robots with lethal capabilities. Building robotic military vehicles of all sorts is already achievable. Robot tanks, aircraft, ships, submarines, and humanoid-shaped soldiers are possible today. Unlike remote-controlled drones, military robots could identify targets and destroy them without a human giving the final order to shoot. The danger of such technology should be obvious, but it goes beyond the immediate threat of "friendly fire" incidents in which robots mistakenly kill people from their own side of a battle, or even innocent civilians. The greater danger lurks in the international arms race that could be set off if any nation deploys autonomous military robots. After a few cycles of improvement, the race to develop ever more powerful military robots could cross a threshold in which the latest generation of autonomous military robots would be able to outfight any human-controlled military system. And then, either by accident ("Who knew that Artificial Intelligence could emerge spontaneously in a military robot?") or by design ("We didn't think hackers could re-program our military robots remotely!") humans might find themselves crushed into subservience, like the helot slaves of Spartan AI overlords.
8. Weaponized Pathogens
This is another bad one that's disturbingly topical. As noted by Ray Kurzweil and Bill Joy back in 2005, publishing the genomes of deadly viruses for all the world to see is a recipe for death. There's always the possibility that some idiot or a fanatical group will take this information and either reconstruct the virus from scratch or modify an existing virus to make it even more virulent — and then release it onto the world. It has been estimated, for example, that the engineered Avian Flu could kill half of the world's humans. Just as disturbingly, researchers from China combined bird and swine flus to create a mutant airborne virus. The idea, of course, is to know the enemy and prepare possible countermeasures before an actual pandemic strikes. But there's always the danger that the virus could escape from the lab and wreak havoc in human populations. Or that the virus could be weaponized and unleashed. There's even the scary potential for weaponized genome-specific viruses.
https://gizmodo.com/nature-goes-ahead-and-publishes-study-explaining-how-to-5907096
https://gizmodo.com/engineered-avian-flu-could-kill-half-the-worlds-humans-5863078

https://gizmodo.com/scientists-combine-bird-and-swine-flus-to-create-mutant-493164638
It's time for governments to start thinking about this grim possibility before something awful happens. As reported in Foreign Policy, ISIS is certainly one group that already appears ready and willing.
http://complex.foreignpolicy.com/posts/2025-02-19/is_the_isis_laptop_of_doom_an_operational_threat_0

9. Virtual Prisons and Punishment
What will jails and punishment be like when people can live for hundreds or thousands of years? And what if prisoners have their minds uploaded? Ethicist Rebecca Roache offers these horrifying scenarios:
The benefits of … radical lifespan enhancement are obvious — but it could also be harnessed to increase the severity of punishments. In cases where a thirty-year life sentence is judged too lenient, convicted criminals could be sentenced to receive a life sentence in conjunction with lifespan enhancement. As a result, life imprisonment could mean several hundred years rather than a few decades. It would, of course, be more expensive for society to support such sentences. However, if lifespan enhancement were widely available, this cost could be offset by the increased contributions of a longer-lived workforce.
… [Uploading] the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000-year sentence in eight-and-a-half hours. This would, obviously, be much cheaper for the taxpayer than extending criminals' lifespans to enable them to serve 1,000 years in real time. Further, the eight-and-a-half hour 1,000-year sentence could be followed by a few hours (or, from the point of view of the criminal, several hundred years) of treatment and rehabilitation. Between sunrise and sunset, then, the offending criminal could serve a millennium of hard labor and return fully rehabilitated either to the real world (if technology facilitates transferring them back to a biological substrate) or, perhaps, to exile in a computer-simulated world.

That's awful! Now, it's important to note that Roache is not advocating these punishment methods — she's just doing some foresight work. But holy smokes, let's never EVER turn this into a reality.
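For what it's worth, the time-compression arithmetic in Roache's scenario does hold up. Here is a quick back-of-the-envelope sketch: the 1,000-year sentence and the million-fold speedup are her figures, while the helper function and its name are purely illustrative.

```python
# Back-of-the-envelope check of the uploaded-prisoner scenario quoted above.
# Assumption: the emulation runs at a constant 1,000,000x real time.

HOURS_PER_YEAR = 365.25 * 24  # roughly 8,766 wall-clock hours per year

def wall_clock_hours(subjective_years: float, speedup: float) -> float:
    """Wall-clock hours needed for `subjective_years` of experienced time
    when the emulation runs `speedup` times faster than real time."""
    return subjective_years * HOURS_PER_YEAR / speedup

print(round(wall_clock_hours(1_000, 1_000_000), 1))  # -> 8.8 hours, i.e. roughly "eight-and-a-half"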
10. Hell Engineering
This one's quite similar to the previous item. Some futurists make the case for paradise engineering — the use of advanced technologies, particularly consciousness uploading and virtual reality, to create a heaven on Earth. But if you can create heaven, you can also create hell. It's a prospect that's especially chilling when you consider lifespans of indefinite length, along with the nearly limitless possibilities for psychological and physical anguish. This is actually one of the worst things I can think of; why anyone would want to develop such a thing is beyond me. It's yet another reason for banning the development of artificial superintelligence — and the onset of the so-called Roko's Basilisk problem.