5G Is the Kill Shot

5G, 5G, 5G… It’s supposed to be the best thing since sliced bread.

Take a look at this short video below, which presents:

IEEE Propaganda: Everything You Need to Know About 5G

A key quote from the video above is “but what exactly is a 5G network? The truth is that experts can’t tell us what 5G actually is because they don’t even know yet . . .”

“So, why is the FCC voting on regulations with NO CONGRESSIONAL MANDATE for a yet-to-be-defined technology?

Never underestimate the lobbying power of BIG TELECOM MONEY in Washington, DC or in our State Capitols, to execute the Republican/ALEC agenda in an era when MONEY = FREE SPEECH. We are witnessing a disaster that rapes and pillages the public interest to further enrich our already-obscenely-profitable and monopoly-power-drunk Telecom firms which are grabbing for all they can get during this short-lived Trump administration.”(1)

“Since much of the US population is unwisely bathing itself 24/7/365 in data-carrying, pulsed, Radio-Frequency Microwave Radiation (RF/MW radiation), it is important to understand what we are actually doing to ourselves, our loved ones and our environment. Duration of exposure to these micro-second pulses of electrical power, not intensity, is the most important factor.

Therefore, always-on wireless infrastructure antennas are hazardous to one’s health — that is why wireless telecommunications base station antennas belong 200 feet off the ground and at least 2,500 feet away from homes, schools, hospitals, public buildings, parks and wilderness areas. That is why installation of so-called “Small Cell” Distributed Antenna Systems (DAS) anywhere near second-story bedroom windows is a disaster — no matter what government guideline is quoted to justify this assault.

The common thread that ties WiFi, 4G and 5G together is the reliance on OFDM/OFDMA modulation — sophisticated mathematical transformations that pack huge amounts of digital data onto carrier microwaves in order to transmit the data through the atmosphere. The pulsed microwaves either penetrate or reflect off of anything in their path: any flora, fauna and man-made structures you can imagine, including human adults and children, pets, and already-threatened pollinator species: birds, bees and butterflies.

What is WiFi and why should you turn off your Wireless Router/Access Point?

Pulsed Microwave Radiation is the Foundation of Wireless Mobile Data. Microwave Radiation, when used to transfer data from Point A to Point B, consists of micro-second pulses of electrical power sprayed through the atmosphere. Microwaves either penetrate or reflect off of anything or anyone in their path. The data transferred are electrical pulses or “bullets” traveling at the speed of light, about 670 million miles per hour. These pulses of data can cause biological harm to many living organisms, including humans.

How Do Microwaves Radiate Over Long Distances?

Electromagnetic waves are produced whenever charged particles are accelerated. In the near-field region (within 3 to 4 wavelengths of the source antenna), waves are incoherent, erratic and choppy, with high micro-second peaks of Electric and Magnetic fields. This creates a toxic “hell-stew” of powerful zaps, crackles and pops that are difficult to characterize with any degree of accuracy. Unfortunately, this is the range where people typically hold their wireless devices. Whenever one sends or receives digital data wirelessly, a toxic, spherical cloud 36″ to 48″ in diameter forms around the device, exposing everyone nearby to peaks of RF/MW radiation.

Specific Absorption Rate (SAR) is neither an accurate nor a scientific measure of the hazards created by this mixture of Electric and Magnetic fields in the near-field region. SAR is a misleading ‘average of an average’, designed to hide the peaks of Electric and Magnetic power that surround one’s device. These peaks of power interrupt the sensitive electrical signals of our body, cause DNA and neurological damage, suppress the immune system, and interrupt hormone production and regulation.
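The distinction drawn above between peak power and time-averaged power can be illustrated with a simple pulsed waveform: when the duty cycle is low, averaging over time produces a number far below the pulse peaks. This is a toy numerical model with made-up values, not a SAR measurement:

```python
import numpy as np

# A pulsed signal: 1 W peaks that are "on" 10% of the time (made-up numbers).
peak_power_w = 1.0
duty_cycle = 0.10
samples = 1000

signal = np.zeros(samples)
signal[: int(samples * duty_cycle)] = peak_power_w  # the "on" portion of the cycle

print(f"peak power:    {signal.max():.2f} W")
print(f"average power: {signal.mean():.2f} W")  # 10x lower than the peak
```

The lower the duty cycle, the more the average understates the peaks, which is the gap between peak-based and average-based exposure figures that this passage is pointing at.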

How Do Microwaves Send and Receive Digital Data?

A carrier wave transports massive numbers of erratic pulses of digital data that wirelessly transmit text, image, audio and video data to and from computers, tablets, phones and the (predicted) billions of IoT (Internet of Things) machines, appliances, “things,” sensors and devices. Unfortunately, the microwaves used for this purpose are not the smooth sine waves you may have learned about in textbooks that describe the transmission of visible light (430 to 770 THz) or Alternating Current electrical power (60 Hz).

Natural Electromagnetic Fields (EMF) come from two main sources: the sun and thunderstorm activity, with the Earth’s Schumann Resonance centering around 7.83 Hz. Man-made, pulsed RF/MW radiation differs from this natural EMF. Man-made Radio-Frequency Electromagnetic Fields (RF/EMF) and the resulting RF/MW radiation are described by the equation c = f λ, where c = the speed of light, f = the frequency, and λ = the wavelength. Since c is a constant, as frequency increases, wavelength decreases. Frequency is measured in a unit called Hertz, which represents the number of cycles or oscillations of a wave in one second. The unit is named after Heinrich Rudolf Hertz, the German scientist who first demonstrated the existence of electromagnetic waves.
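The relationship c = f λ can be checked numerically. A quick sketch; the example frequencies are ones mentioned elsewhere in this article:

```python
# Wavelength from frequency via c = f * lambda, so lambda = c / f.
C = 299_792_458  # speed of light in m/s

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in meters for a given frequency in Hz."""
    return C / frequency_hz

# Illustrative frequencies drawn from this article.
for label, f_hz in [
    ("AC power (60 Hz)", 60),
    ("Schumann resonance (7.83 Hz)", 7.83),
    ("Wi-Fi (2.4 GHz)", 2.4e9),
    ("mm-wave (60 GHz)", 60e9),
]:
    print(f"{label}: {wavelength_m(f_hz):.4g} m")
```

As frequency climbs from 60 Hz to 60 GHz, the wavelength falls from thousands of kilometers to about 5 millimeters, which is exactly the inverse relationship the equation states.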

In order to transmit digital data, an antenna’s microchips distort the waves’ shape or pace to modulate (encode) the data stream onto the carrier waves at the source before the antenna transmits them. At the destination, other microchips demodulate (decode) the data stream so the destination device can display the text/image or play the audio/video. A modem is a device that literally modulates and demodulates data streams; engineers shortened the name to modem. Each antenna in this scheme is a two-way microwave transmitter/receiver.
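The modulate-then-demodulate round trip described above can be sketched for the OFDM scheme this article keeps returning to. This is a toy baseband model (arbitrary subcarrier count, QPSK mapping, no noise, no real radio hardware), not any carrier’s actual implementation:

```python
import numpy as np

def ofdm_modulate(bits: np.ndarray, n_subcarriers: int = 64) -> np.ndarray:
    """Encode: map bit pairs to QPSK symbols on subcarriers, then IFFT to time domain."""
    # QPSK: two bits per symbol, mapped to the four corners of the complex plane.
    symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])
    frame = symbols.reshape(-1, n_subcarriers)  # one OFDM symbol per row
    return np.fft.ifft(frame, axis=1)

def ofdm_demodulate(signal: np.ndarray) -> np.ndarray:
    """Decode: FFT back to subcarriers and recover the bit pairs from symbol signs."""
    symbols = np.fft.fft(signal, axis=1).ravel()
    bits = np.empty(symbols.size * 2, dtype=int)
    bits[0::2] = (symbols.real < 0).astype(int)
    bits[1::2] = (symbols.imag < 0).astype(int)
    return bits

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 2 * 64 * 4)  # four OFDM symbols' worth of bits
rx_bits = ofdm_demodulate(ofdm_modulate(tx_bits))
print("bits recovered:", np.array_equal(tx_bits, rx_bits))
```

The modem pairing is visible in the two functions: the source runs the modulate step, the destination runs the demodulate step, and each antenna in a real system does both directions.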

There are an infinite number of combinations of wavelength, frequency, intensity and modulation, the mathematical transformations that encode data onto a carrier wave. Each combination is a new digital fingerprint that uniquely identifies a new man-made toxic agent that, when transmitted into the air, instantly fills our homes, schools, workplaces or public spaces.

Microwaves have different properties, depending on their wavelength. The longer waves (20″ down to 5″) travel further and penetrate deeper into buildings and living tissue. The shorter waves (0.5″ down to 0.1″) are called millimeter waves (mm-waves) because they measure from 10 mm (at 30,000 MHz = 30 GHz) down to 1 mm (at 300,000 MHz = 300 GHz). The mm-waves are not as efficient because they don’t travel as far, tend to reflect off of buildings, and deposit mainly into the eyes and skin of living organisms.”(3)

A key target for 5G frequency use is 60 GHz:

William Webb: The 5G Myth And why Consistent Connectivity is the Goal

Some have suggested that 5G will be whatever interesting developments happen from 2018 onwards. There is much political capital expended in claiming that 5G will be deployed early in a country. MNOs and Governments will simply claim from 2018 onwards that they have 5G even though all that has been deployed is evolved 4G. For all the debate, 5G could be a label, not a technology.

End users will be told that they now have 5G even though they have not changed their handsets nor received any improvements in service. Or alternatively, in the case where 5G is the label applied to the introduction of IoT, consumers may be told that 5G is not about their handset but about the ability to connect their devices. In the US, the term 5G might be applied to fixed wireless deployments, with “5G to the home”. Given the confusion around what 5G is, this second “label whatever we have as 5G” approach could be used in Europe and the US alongside the first “limited millimeter wave deployment” approach happening in Asia Pacific.

This is the chilling observation: the planning documents for the recently installed 4G Palo Alto Small Cells show that even though the RF/MW radiation calculations were made for 6 watts of input power, the actual input power for each small cell antenna is 300 to 500 watts, and the “associated equipment” power supply cabinet is 17.5 cubic feet — one for each antenna!

Similar documents for a 2017 Verizon 4G Small Cell deployment in Weston, MA state that each antenna outputs 1,257 Watts of Effective Radiated Power (ERP), a significant RF/MW radiation exposure. These military-grade RF/MW radiation exposures do not belong in residential zones, no matter what over-the-rainbow promises are being made about 5G by those who wish to profit from 4G/5G Distributed Antenna Systems (DAS) installations everywhere.
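For context on how the Watt figures quoted in these documents relate to each other: Effective Radiated Power is the transmitter’s input power multiplied by the antenna’s gain, so a modest input power can correspond to a much larger ERP figure. A sketch of that arithmetic; the gain values here are illustrative back-of-envelope numbers, not values taken from the planning documents:

```python
import math

def erp_watts(input_power_w: float, antenna_gain_db: float) -> float:
    """ERP = input power x linear antenna gain (gain converted from dB)."""
    return input_power_w * 10 ** (antenna_gain_db / 10)

def gain_db_for_erp(input_power_w: float, erp_w: float) -> float:
    """Back out the antenna gain (in dB) implied by a given input power and ERP."""
    return 10 * math.log10(erp_w / input_power_w)

# Illustrative: what antenna gain would turn 6 W of input into 1,257 W of ERP?
print(f"{gain_db_for_erp(6, 1257):.1f} dB")  # roughly 23 dB
```

In other words, an input-power figure and an ERP figure describe the same installation seen before and after the antenna’s focusing effect, which is why the two kinds of documents quote such different numbers.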

Since SB.649 contains no language for real-time monitoring of RF/MW radiation levels, and no warning or detection systems if the RF/MW radiation reaches hazardous levels, the equipment is already in place to easily increase the power input from 6 watts to 500 watts — which would constitute a Microwave Radiation weapon on every block in downtown Palo Alto, not unlike weapons the Federal Government has already designed and deployed.

Here is an actual millimeter-wave application, already designed and deployed . . .

Military Ray Gun: Active Denial System, an Electromagnetic Radiation Weapon:

The military exploits the fact that Radio-Frequency Microwave Radiation (RF/MW radiation) is biologically active, while the Wireless industry denies that RF/MW radiation is biologically active. Which one is lying?

  • 5G is a pipe-dream: it is not even close to being deployed at scale. The engineers understand this. 5G is the smoke screen/hype machine being used to sell this vision to the politicians.
  • This is actually a real-estate scheme: AT&T and Verizon want to invade neighborhoods and secure cut-rate, rent-controlled access to publicly-owned structures (utility poles, light poles, traffic lights, street signs) so they can expand their operations, at will, without regulation. AT&T and Verizon will install 4G/LTE today to make obscene profits now . . . and perhaps never upgrade it to 5G.

  • Have AT&T and Verizon behaved like this in the past? Sure. Read about their promises to upgrade the copper wireline networks with fiber-optic cables. After years of promises, and a fraudulent diversion of funds that they received to install the fiber-optic cables, they determined it would lower their costs and maximize their profits to use Wireless for broadband instead — even though Wireless broadband is hazardous, extremely energy-inefficient, much less secure and much less reliable than fiber-optic to the home.
  • The real goal is to secure a new source of revenue: cell phone subscriptions are a mature (but still very profitable business); now the Wireless companies want to steal market share from Comcast and other cable companies and charge for data by the gigabyte. How to best do this? Make timely donations to politicians’ local programs to get these politicians to do their bidding, like price-fixing the access price to publicly-owned property to install powerful two-way microwave transmitters 10-15 feet from second story windows in residential neighborhoods.”(1)

More bandwidth – more dangers of 5G

“Who doesn’t want faster, bigger (or smaller), more efficient? Take wireless mobile telecommunications. Our current broadband cellular network platform, 4G (or fourth generation), allows us to transmit data faster than 3G and everything that preceded it. We can access information faster now than ever before in history. What more could we want? Oh, yes, transmission speeds powerful enough to accommodate the (rather horrifying) so-called Internet of Things. Which brings us to 5G.

Until now, mobile broadband networks have been designed to meet the needs of people. But 5G has been created with machines’ needs in mind, offering low-latency, high-efficiency data transfer. It achieves this by breaking data down into smaller packages, allowing for faster transmission times. Whereas 4G has a fifty-millisecond delay, 5G data transfer will offer a mere one-millisecond delay–we humans won’t notice the difference, but it will permit machines to achieve near-seamless communication. Which in itself  may open a whole Pandora’s box of trouble for us – and our planet.

Let’s start with some basic background on 5G technology. Faster processing speeds require more bandwidth, yet our current frequency bandwidths are quickly becoming saturated. The idea behind 5G is to use untapped bandwidth of the extremely high-frequency millimeter wave (MMW), between 30GHz and 300GHz, in addition to some lower and mid-range frequencies.

High-frequency MMWs travel a short distance. Furthermore, they don’t travel well through buildings and tend to be absorbed by rain and plants, leading to signal interference. Thus, the necessary infrastructure would require many smaller, barely noticeable cell towers situated closer together, with more input and output ports than there are on the much larger, easier to see 4G towers. This would likely result in wireless antennas much closer together, on every lamp post and utility pole in your neighborhood.
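The short range of mm-waves noted above is partly just geometry: for simple isotropic antennas, free-space path loss grows with frequency, following FSPL(dB) = 20·log10(4πdf/c). A sketch of that standard formula, with illustrative distances and frequencies (atmospheric and rain absorption, which hit mm-waves hardest, are not modeled here):

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / C)

# Compare a mid-band 4G frequency with two 5G mm-wave frequencies at the same range.
for f in (1.9e9, 28e9, 60e9):
    print(f"{f / 1e9:g} GHz at 200 m: {fspl_db(200, f):.1f} dB")
```

At the same 200 m distance, the loss at 28 GHz is about 23 dB worse than at 1.9 GHz, which is the basic reason mm-wave deployments need many more, closer-spaced antennas.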

Here are some numbers to put things into perspective: as of 2015, there were 308,000 wireless antennas on cell towers and buildings. That’s double the 2002 number. Yet 5G would require exponentially more, smaller ones, placed much closer together, with each emitting bursts of radio-frequency radiation (RFR)–granted, at levels much lower than that of today’s 4G cell towers–that will be much harder to avoid because these towers will be ubiquitous. If we could see the RFR, it would look like a smog that’s everywhere, all the time.”(2)

Serious health concerns:

“Definition of Microwave Sickness:

Original Merriam-Webster® link here.

a condition of impaired health . . . that is characterized by headaches, anxiety, sleep disturbances, fatigue, and difficulty in concentrating and by changes in the cardiovascular and central nervous systems and that is held to be caused by prolonged exposure to low-intensity microwave radiation”(1)

“It’s important to know that in 2011, the World Health Organization’s International Agency for Research on Cancer classified RFR as a potential 2B carcinogen and specified that the use of mobile phones could lead to specific forms of brain tumors.

Many studies have associated low-level RFR exposure with a litany of health effects, including:

  • DNA single and double-strand breaks (which leads to cancer)
  • oxidative damage (which leads to tissue deterioration and premature ageing)
  • disruption of cell metabolism
  • increased blood-brain barrier permeability
  • melatonin reduction (leading to insomnia and increasing cancer risks)
  • disruption of brain glucose metabolism
  • generation of stress proteins (leading to myriad diseases)”(2)

And then there’s the negative effects on mitochondria in the cells:

“As mentioned, the new 5G technology utilizes higher-frequency MMW bands, which give off the same dose of radiation as airport scanners.

The effects of this radiation on public health have yet to undergo the rigors of long-term testing. Adoption of 5G will mean more signals carrying more energy through the high-frequency spectrum, with more transmitters located closer to people’s homes and workplaces–basically a lot more (and more potent) RFR flying around us. It’s no wonder that apprehension exists over potential risks, to both human and environmental health.

Perhaps the strongest concern involves the adverse effects of MMWs on human skin. This letter to the Federal Communications Commission, from Dr Yael Stein of Jerusalem’s Hebrew University, outlines the main points. Over ninety percent of microwave radiation is absorbed by the epidermis and dermis layers, so human skin basically acts as an absorbing sponge for microwave radiation. Disquieting as this may sound, it’s generally considered acceptable so long as the offending wavelengths are greater than the skin layer’s dimensions. But MMWs violate this condition.

Furthermore, the sweat ducts in the skin’s upper layer act like helical antennas, which are specialized antennas constructed specifically to respond to electromagnetic fields. With millions of sweat ducts, and 5G’s increased RFR needs, it stands to reason that our bodies will become far more conductive to this radiation. The full ramifications of this are presently unclear, especially for more vulnerable members of the public (e.g., babies, pregnant women, the elderly).

Furthermore, MMWs may cause our pain receptors to flare up in recognition of the waves as damaging stimuli. Consider that the US Department of Defense already uses a crowd-dispersal method called the Active Denial System, in which MMWs are directed at crowds to make their skin feel like it’s burning, and also has the ability to basically microwave populations to death from afar with this technology if they choose to do so. And the telecommunications industry wants to fill our atmosphere with MMWs?

Animal research worldwide illustrates how microwave radiation in general, and MMWs in particular, can damage the eyes, the immune system, cell growth rate, and even bacterial resistance. An experiment at the Medical Research Institute of Kanazawa Medical University showed that 60GHz millimeter-wave antennas produce thermal injuries in rabbit eyes, with thermal effects reaching below the eye’s surface. This study, meanwhile, suggests low-level MMWs caused lens opacity (a precursor to cataracts) in rats’ eyes. A Chinese study demonstrated that eight hours of microwave radiation damaged rabbits’ lens epithelial cells. A Pakistani study concluded that exposure to mobile phone EMF prevented chicken embryo retinal cells from properly differentiating.

This Russian study revealed that exposing healthy mice to low-intensity, extremely high-frequency electromagnetic radiation severely compromised their immune systems. And a 2016 Armenian study concluded that low-intensity MMWs not only depressed the growth of E. coli and other bacteria, but also changed certain properties and activity levels of the cells. The same Armenian study noted that MMW interaction with bacteria could lead to antibiotic resistance–distressing news, considering resistance is already widespread due to the overuse of antibiotics.

Again, if these findings translate to humans, our rampant cellphone use would likely cause profound, adverse health effects; an increase in MMWs as more bandwidth is introduced could further complicate the matter. But what’s also important to note here is that 5G technologies will have a profound impact not only on human health, but on the health of all living organisms they touch, including plants, as we shall see.

Equally disturbing, 5G technology puts environmental health at risk in a number of ways. First, MMWs may pose a serious threat to plant health. This 2010 study showed that the leaves of aspen seedlings exposed to RFR exhibited symptoms of necrosis, while another Armenian study suggested low-intensity MMWs cause “peroxidase isoenzyme spectrum changes”–basically a stress response that damages cells–in wheat shoots. Plant irradiation is bad news for the planet’s flora, but it’s bad news for us, too: it could contaminate our food supply.

5G will also potentially threaten natural ecosystems. According to several reports over the last two decades–some of which are summarized here–low-level, non-ionizing microwave radiation affects bird and bee health. It drives birds from their nests and causes plumage deterioration, locomotion problems, reduced survivorship and death. Bee populations suffer from the reduced egg-laying abilities of queen bees and smaller colony sizes. More evidence of ecosystem disruption comes from this 2012 meta-study, which indicates that 593 of 919 research studies suggest that RFR adversely affects plants, animals and humans.

It bears repeating: 5G is bad news for all living creatures and the planet we share.

Stop Killing Yourselves

The following video puts a bit more emphasis on the seriousness of this silent weapon:

Beware the propaganda deluge

Despite being fully aware of all these unsettling results, threats and concerns, the US corporatocracy continues to maintain a gung-ho attitude about 5G. The Mobile Now Act was passed in 2016, and many US states have since gone ahead with 5G plans. The telecom industry’s biggest players have basically co-opted government powers to enforce their 5G agenda, with companies like AT&T and Qualcomm having begun live testing. And despite research showing serious threats to humans and the planet, the FCC Chairman announced intentions to open low-, mid- and high-frequency spectrums, without even mentioning a single word about the dangers.

They’re going to sell this to us as ‘faster browsing speeds’ – but the truth is, you’ll barely even notice the difference. They’re going to call anyone who protests against 5G a ‘Luddite’ or ‘technophobe’. But why such a willingness to embrace another new technology – even though it carries serious risks and brings spurious benefits? Why not heed the lessons learned from killer products like asbestos, tobacco and leaded gasoline?

One reason is that a tiny percentage of people will gain an awful lot of money. The other is that companies and governments will be given unprecedented amounts of power over civilians.

All isn’t doom and gloom, though. At least one US politician is maintaining some level-headedness: in October, California Governor Jerry Brown stopped legislation that would have allowed the telecom industry to inundate the state with mini-towers. Brown’s bold actions have permitted localities a say in where and how many cell towers are placed.

The state of Hawaii has stopped 5G and smart meters by collectively threatening to charge every person who installed such meters with liability for any health problems residents may suffer. Moreover, 180 scientists have started a petition to warn of 5G potential health effects. Maybe these actions will afford more time for additional studies and data collection. Just as importantly, maybe they’ll cause other politicians and figureheads to reflect on what they’ve been pushing for.

Millimeter waves (from 10-mm|30GHz to 1-mm|300GHz) are readily absorbed by the atmosphere and by the eyes and skin of living organisms

In the first quarter of 2017, the US population was being irradiated primarily by the following pulsed microwaves:

  • 700 million to 2.1 billion cycles per second (700 MHz to 2.1 GHz) for 2G/3G/4G mobile data sent to cell phones
  • 2.4 billion to 5.8 billion cycles per second (the 2.4 GHz and 5.8 GHz bands) for Wi-Fi data to tablets/laptops

In the near future, if Verizon, AT&T and other wireless carriers have their way, the US population will be irradiated with additional pulsed microwaves (24 billion to 90 billion cycles per second, i.e., 24 GHz to 90 GHz) for 5G services and for navigation-assisted cars.

As with any toxic agent, the proper way to evaluate its toxicity is to consider not just the rate of exposure (as the Federal RF/MW radiation guidelines do), but the total exposure over time. Below is a graph of RF/MW radiation exposures from Wi-Fi for an elementary school student using a wireless iPad. One can see extremely high peaks of electrical power. These peaks cannot be seen using SAR tests.

The intense peaks of RF/MW radiation and the total exposure over time (not the average rate of exposure) are what impact health the most.

In May of 2016, scientists at the US Federal National Toxicology Program released “partial findings” from their $25 million study on cellphone radiation, which found that both hyperplasias (abnormal increases in the volume of a tissue or organ caused by the formation and growth of new, normal cells) and tumors occur at significantly higher rates in the presence of continuous RF/MW radiation.

Disregarding these findings, six short weeks later the FCC approved a move to 5G, and the wireless industry got to work installing Distributed Antenna Systems on utility poles as quickly as they possibly could. Some antennas have been placed as close as 20 feet from second-story bedroom windows and will spray 4G or 5G RF/MW radiation 24/7/365 into these bedrooms. Cancer clusters have been documented for people living closer than 2,000 feet to mobile communications base stations. Antennas for mobile communications base stations should never be lower than 200 feet, and never closer than 2,000 feet to people and other living organisms.

Massive electromagnetic pollution is spiraling out of control, with both Industry and Government denying the scientific proof of harm from RF/MW radiation. Our Government and the Wireless Industry should not transmit digital data wirelessly using the data-dense modulation schemes (Orthogonal Frequency-Division Multiplexing, OFDM/OFDMA) employed in Wi-Fi, 4G/LTE and 5G — because the US Government has already proven that even the data-sparse 2G modulation is hazardous. We must, instead, transmit data from Point A to Point B, to every business, home, school and farm, with far superior fiber-optic cables.”(3)

Most 5G Studies Misleading

“5G will use pulsed millimeter waves to carry information. But as Dr. Joel Moskowitz points out, most 5G studies are misleading because they do not pulse the waves. This is important because research on microwaves already tells us how pulsed waves have more profound biological effects on our body compared to non-pulsed waves. Previous studies, for instance, show how pulse rates of the frequencies led to gene toxicity and DNA strand breaks.

Live Testing Already Begun

AT&T has announced the availability of its 5G Evolution in Austin, Texas. 5G Evolution gives Samsung S8 and S8+ users access to faster speeds. This is part of AT&T’s plan to lay the 5G foundation while the standards are being finalized, which is expected to happen in late 2018. AT&T has eyes on 19 other metropolitan areas, such as Chicago, Los Angeles, Boston, Atlanta and San Francisco. Indianapolis is up next on their 5G trail, due to arrive in the summer.

Charter, the second-largest cable operator in the US, has been approved for an experimental 28 GHz license in Los Angeles. The outdoor tests will use fixed transmitters with a 1 km or smaller effective radius.

Qualcomm has already demonstrated a 5G antenna system with about 27 decibels of gain, which, according to ABI Research, is “about 10 to 12 more dB than a typical cellular base station antenna.” Not a good sign.

Many more private-sector companies, such as HTC, Oracle, Sprint and T-Mobile, are playing a role in the development of testing platforms by contributing time, knowledge or money.

In the UK, the 3.4GHz band has been earmarked for 5G use, with contracts awarded to O2, Vodafone, EE and Three, while the 2.3GHz band, awarded to O2, is likely to be used for 5G in time as well.”(4)

Take a look at this short video on some of the negative health effects of cell phones and 5G:

Take action

“In the meantime, we as individuals must do everything we can to protect ourselves. Here’s what you can do:

  • Understand EMFs and their behaviors
  • Use EMF meters to measure, mark and avoid hotspots
  • Whenever possible, limit your exposure: use a headset or speaker mode while talking on a cellphone.
  • Refuse to use 5G phones and devices. Full stop. And discourage those you know from doing so.
  • Refuse to buy anything ‘smart’ – ‘smart’ appliances, ‘smart’ heaters, etc.
  • No matter what, do NOT get a smart meter – these put high levels of 5G radiation right in your home
  • Join the growing numbers of dissenters. Get active with them here.
  • Do as the Hawaiians have done and threaten smart meter and 5G tech installers with liability. You can learn how to do that here.
  • Spread the word! Share this article with everyone you know”(2) 

Some Final Thoughts:

Cites:

(1) What are 4G/5G?

(2) Frightening Frequencies: The Dangers of 5G & What You Can Do About Them

(3) Microwave Radiation Primer

(4) 5G Radiation Dangers – 11 Reasons To Be Concerned

Related Posts:

Silent Weapons For Quiet Wars

Chemtrail (Aerosol) Spraying

Contrails vs. Chemtrails (Aerosols)

Dirty Electricity and the Diseases of Civilization

Electromagnetic Field Interactions With Biological Systems

EMF Exposure and Potential Health Effects

EMF-Impact on Health

How To Search For Cell Tower And Antenna Locations

Low Intensity Radiofrequency Radiation: A New Oxidant For Living Cells

The Depopulation Agenda

The History And Dangers Of Microwave Technology

The Microwave Control Grid

Vaccines

What Chemtrails Are Doing To Your Brain

Meet David Hogg – Crisis Actor & Friends

Parkland School Shooting Personality David Hogg Can’t Remember His Lines:

And there’s more:

So ……..Why would these folks be used in this supposed real event?

“This particular event stands out due to all the laughing and smiling by the main students being promoted as angry anti-gun advocates by the media.

It is notable that the mainstream media has quickly embraced a few specific personalities in regards to the Parkland school shooting event:

“Keep in mind that this is the same media that openly promotes wars overseas and even support for cloaked terrorist groups such as the White Helmets in Syria. Thus, they should always be watched with a skeptical eye as they often misreport the truth and rely on triggering an emotional response from their audience.

Information posted online by the students at Parkland and by other individuals reveals some of these children to be involved with the crafts of acting, reporting and directing movies:”(1)

Facebook page of Jeff Kasky, father of student Cameron Kasky who has been featured in numerous media appearances since the event, shows that his father posted the image below on December 10, 2017. Link:  https://www.facebook.com/jeffkasky

Link to additional Troupe 4879- Mobilizing MSD (Marjory Stoneman Douglas High School) account (established Saturday, February 17, 2018):

https://www.facebook.com/groups/1672529512839350/

Account is administrated by Jessica Goodin:

Link to Jessica Goodin Facebook account:

https://www.facebook.com/jessica.goodin.52?fref=gm&dti=1672529512839350&hc_location=group

Screen capture of front page of account shows that Jessica Goodin is a special effects Make up artist at Universal Studios Florida and Faces By Jessica, as well as studying special effects make up at the Cosmix School of Makeup Artistry:

Keep in mind that Jessica, who created the special effects above according to her own Facebook posts is the administrator for Troupe 4879 that Cameron Kasky works for as an award-winning director according to his own father’s Facebook post on December 10, 2017, as shown above.

The following image from Stoneman Douglas Drama Club confirms the organizations are one and the same:

The next student actor now doing media promotion for the Parkland shooting event is Alex Wind. He has appeared across numerous mainstream media stations in the wake of the event:

Two more examples (along with the one up above) of him on MSM outlets:

This is the link to his Facebook page:

https://www.facebook.com/alex.wind.1447/friends?lst=1509910576%3A100002387891632%3A1519270919&source_ref=pb_friends_tl

Images from his Facebook page reveal him to also be a student actor and a close friend of Kasky:

Onto the next student chosen by the media to discuss the Parkland shooting and antigun issues – Delaney Tarr:

Link to Facebook account of Delaney Tarr (image above), the 4th student personality being showcased by the mainstream media in the wake of the Parkland school shooting event reveals that she also has previous experience acting in front of the camera:

https://www.facebook.com/photo.php?fbid=1538279549832343&set=pb.100009509379163.-2207520000.1519430434.&type=3&theater

Below are other student witness statements that include anomalies related to the February 14, 2018 school shooting, along with a clip of CNN’s crying FBI anti-terrorism agent.

Multiple Shooters Participated in Florida School Shooting According to Eyewitness Alexa Miednik:

Second student witness says up to three shooters in Parkland Florida school mass shooting:

CIA Terrorism Expert ‘Cries’ on CNN During Wolf Blitzer Interview About Florida School Shooting:

A poor attempt by this actor:

“Many students and faculty who were at Marjory Stoneman Douglas High School in Parkland, Florida, during the Valentine’s Day shooting thought it was just a drill after they were told in advance that role-players would be conducting a fake ‘code red’, which is an active-shooter scenario.”(2)

The ‘Lift The Veil’ channel highlights the media now covering the active shooter drill that was taking place at the school. This looks a lot like damage control:

“Given they already had the make-up and fake wound materials at the school, how hard would it be to FAKE a “real” shooting for gun-control propaganda purposes?

But then folks ask, what about the ACTUAL dead kids? Well, in the ABC NEWS video below, we find that TV Networks are already placing ADULTS AS FAKE STUDENTS in schools with FAKE IDENTITIES to find out what’s really going on inside schools, and to “address issues” like bullying, sexuality and other things.

IF THEY ALREADY HAVE FAKE STUDENTS WITH FAKE IDENTITIES, FAKING THEIR DEATHS IS EASY!!

WHO BENEFITS?

The 30-second video below shows former Attorney General Eric Holder openly telling schools to “brainwash” people about guns:

“Then there’s the “kids” from Parkland High School in Florida being on TV everywhere with their allegedly “grass roots effort” at Gun Control.  Turns out, though, their “grass roots effort” is actually astroturf!

“Can you believe these kids?” It’s been a recurring theme of the coverage of the Parkland school shooting: the remarkable effectiveness of the high school students who created a gun control organization in the wake of the massacre. In seemingly no time, the magical kids had organized events ranging from a national march to a mass school walkout, and they’d brought in a million dollars in donations from Oprah Winfrey and George Clooney.

The Miami Herald credited their success to the school’s stellar debate program. The Wall Street Journal said it was because they were born online, and organizing was instinctive. It wasn’t. 

On February 28, BuzzFeed came out with the actual story: Rep. Debbie Wasserman Schultz aiding in the lobbying in Tallahassee, a teachers’ union organizing the buses that got the kids there, Michael Bloomberg’s groups and the Women’s March working on the upcoming March For Our Lives, MoveOn.org doing social media promotion and (potentially) march logistics, and training for student activists provided by federally funded Planned Parenthood.

The president of the American Federation of Teachers told BuzzFeed they’re also behind the national school walkout, which journalists had previously assured the public was the sole work of a teenager. (I’d thought teachers were supposed to get kids into school, but maybe that’s just me.)

In other words, the response was professionalized – propagandized. That’s not surprising, because this is what organization that gets results actually looks like. It’s not a bunch of magical kids in somebody’s living room. Nor is it surprising that the professionalization happened right off the bat. Broward County’s teachers’ union is militant, and Rep. Ted Lieu stated on Twitter that his family knows Parkland student activist David Hogg’s family, so there were plenty of opportunities for grown-ups with resources and skills to connect the kids.”(2)

“That’s before you get to whether any of them had been involved in the Women’s March. According to BuzzFeed, Wasserman Schultz was running on day two.

What’s striking about all this isn’t the organization. If you start reading books about organizing, it’s clear how it all works. But no journalist covering the story wrote about this stuff for two weeks. Instead, every story was about the Parkland kids being magically effective.

On Twitter, the number of bluechecks rhapsodizing over how effective the kids’ organizational instincts were was striking. But organizing isn’t instinctive. It’s skilled work; you have to learn how to do it, and it takes a lot of people. You don’t just get a few magical kids who’re amazing and naturally good at it.

The real tip-off should have been the $500,000 donations from Oprah Winfrey and George Clooney. Big celebrities don’t give huge money to strangers on a whim. Somebody who knows Winfrey and Clooney called them and asked. But the press’s response was to be ever more impressed with the kids. 

For two weeks, journalists abjectly failed in their jobs, which is to tell the public what’s going on. And any of them who had any familiarity with organizing campaigns absolutely knew. Matt Pearce, of the Los Angeles Times, would have been ideally placed to write an excellent article: not only is he an organizer for the Times’s union, he moderated a panel on leftist activism for the LA Times Book Festival and has the appropriate connections in organizing. Instead, he wrote about a school walkout, not what was behind it. (In another article, Pearce defined Delta caving to a pressure campaign’s demands as “finding middle ground.”)

But it’s not just a mainstream media problem. None of the righty outlets writing about Parkland picked up on the clear evidence that professional organizers were backing the Parkland kids, either. Instead, they objected to the front-and-centering of minor kids as unseemly, which does no good: Lefties aren’t going to listen, and it doesn’t educate the Right to counter.

The closest anyone got was Elizabeth Harrington at the Washington Free Beacon, who noted that Clooney’s publicist was booking the kids’ media interviews pro bono, and said that a friend (not Clooney) had asked him to do it. The result of all this is that the average righty does not understand what’s going on in activism, because all they see is what the press covers. The stuff that’s visible. It’s like expecting people in the Stone Age to grok the Roman army by looking at it. 

Which brings us to Who Benefits from this “shooting?”

Notice how guns have stolen the attention from the absolute provable crimes of Clinton and Obama rigging and trying to steal the election.

There are so many crimes that they both committed that you could just pick any one.

Richard Nixon only dreamed of doing what Obama actually did: weaponizing the IRS against political opponents. Weaponizing the intelligence agencies against political opponents.

These criminals are on the run and using every tactic they can to keep their crimes out of the mainstream news, which is controlled by [………..you guessed it…..the oligarchs and their cultural Marxist minions.]

So was the “mass-shooting” in Florida real . . . or just one big left-wing propaganda push against guns to divert attention from Clinton and Obama crimes?”(2)

“So there you have it: they just happened to be running a bunch of active shooter drills that day and wearing lots of gory makeup, and a crazy kid who looks comatose just happened to show up at the school and shoot a bunch of people for real, and then they just fortunately happened to be a bunch of student actors who had well-rehearsed lines about how angry they are about guns, except when they were caught smiling for the photo shoots, of course. And then they just luckily became media darlings, you know, that same US media that hates guns for American citizens but never met a war it didn’t like :), or a child it wouldn’t exploit to promote its nefarious agenda.

So now these acting students turned political activists for “March for Our Lives” are off on the talk show and nationwide rally circuit, backed by Hollywood millionaires such as George Clooney (White Helmets promoter – the White Helmets are terrorists posing as rescuers, by the way), Steven Spielberg, Oprah Winfrey and other Hollywood elitists…..all to help rid America of its guns and give them up to the government……..a government led by a man the so-called ‘left’ refers to as a modern-day Hitler. Hmmmm…..something’s not quite adding up here. Oh well, maybe their next performance will be more realistic.”(1)

Cites:

(1) Extensive Post Reveals Drills, Anomalies and Child Actors Involved With Parkland School Shooting in Florida on February 14, 2018 (N.S.F.W.)

(2) VIDEO: FAKE BLOOD, FAKE WOUNDS for “Active-Shooter DRILL” at same High School in Florida, hours BEFORE “actual” shootings which “killed 17”

Related Posts:

Dallas Police Shooting Hoax

Gun Control

Las Vegas Mass Shooting – A Gun Control Operation

Las Vegas Mass Shooting: False Flag PsyOp

Munich Shooting Hoax

Nobody Died at Sandy Hook

The Orlando Pulse Shooting Hoax

What Are Major Telltale Signs Of Government Contrived Events (Aka Hoaxes)?

Artificial Intelligence is Hype

Praise for AI is creeping into more and more TED talks, YouTube videos and news feeds as each week passes.

The question is, or should be, how much of the hype surrounding artificial intelligence is warranted?

“For 60 years scientists have been announcing that the great AI breakthrough is just around the corner.  All of a sudden many tech journalists and tech business leaders appear convinced that, finally, AI has come into its own.”(1)

“We’ve all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There’s also a lot of confusion about what they really mean and what’s actually possible today. These terms are used arbitrarily and sometimes interchangeably, which further perpetuates confusion.

So, let’s break down these terms and offer some perspective.

Artificial Intelligence

Artificial Intelligence is a branch of computer science that deals with algorithms inspired by various facets of natural intelligence. It includes performing tasks that normally require human intelligence, such as visual perception, speech recognition, problem solving and language translation. Artificial intelligence can be seen in many everyday products, from intelligent personal assistants in your smartphone to the Xbox 360 Kinect camera, which allows you to interact with games through body movement. There are also well-known examples of AI that are more experimental, from the self-aware Super Mario to the widely discussed driverless car. Other less commonly discussed examples include the ability to sift through millions of images to pull together notable insights.

Big Data

Big Data is an important part of AI and is defined as data sets so large that they cannot be analyzed, searched or interpreted using traditional data processing methods. As a result, they have to be analyzed computationally to reveal patterns, trends, and associations. This computational analysis, for instance, has helped businesses improve customer experience and their bottom line by better understanding human behavior and interactions. Many retailers now rely heavily on Big Data to help adjust pricing in near-real time for millions of items, based on demand and inventory. However, processing of Big Data to make predictions or decisions like this often requires the use of Machine Learning techniques.

Machine Learning

Machine Learning is a form of artificial intelligence involving algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that information to make predictions or decisions, rather than following only explicitly programmed instructions. There are lots of basic decisions that can be made with machine learning, Nest’s learning thermostat being one example. Machine Learning is widely used in spam detection, credit card fraud detection, and product recommendation systems, such as those of Netflix or Amazon.
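To make the contrast with explicitly programmed instructions concrete, here is a minimal word-counting classifier in Python. The training messages, labels and scoring rule are all invented for illustration; real spam filters are far more sophisticated:

```python
from collections import Counter

# Tiny invented training set: (text, label).
training = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]

# "Learning" here is just counting which words occur under each label;
# no rule for any specific message is ever written by hand.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    # Score each label by how often it has seen the message's words.
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free cash prize"))       # spam
print(classify("monday meeting lunch"))  # ham
```

The behavior comes from the data, not from hand-written rules: change the training set and the same code makes different decisions.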

Deep Learning

Deep Learning is a class of machine learning techniques that operate by constructing numerous layers of abstraction to help map inputs to classifications more accurately. The abstractions made by Deep Learning methods are often observed as being human-like, and the big breakthrough in this field in recent years has been the scale of abstraction that can now be achieved. This has resulted in breakthroughs in computer vision and speech recognition accuracy. Deep Learning is inspired by a simplified model of the way neural networks are thought to operate in the brain.
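The ‘layers of abstraction’ idea can be sketched as stacked weighted sums with a nonlinearity between them. The weights and inputs below are arbitrary illustrative values; in a real deep network they would be learned from data, across far more layers and units:

```python
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, followed by a simple ReLU nonlinearity.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    # One layer of abstraction: every unit sees all of the layer below.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers with arbitrary, hand-picked weights.
x = [1.0, 0.5]
hidden = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.0, -0.1])
output = layer(hidden, [[1.0, 1.0]], [0.0])
print(hidden, output)
```

“Deep” networks simply repeat the `layer` step many times, so each layer’s output becomes the next layer’s input.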

No doubt AI is in a hype cycle these days. Recent breakthroughs in Distributed AI and Deep Learning, paired with the ever-increasing need for deriving value from huge stashes of data being collected in every industry, have helped renew interest in AI.”(5)

Human levels of understanding? Really?

How much of an AI breakthrough has humanity actually achieved, as opposed to wishful thinking?

“Gary Marcus, a psychology professor at New York University, who writes about artificial intelligence for the New Yorker, was the first to burst the balloon. He told Geektime that while the coalescence of parallel computation and big data has led to some exciting results, so-called ‘deeper algorithms’ aren’t really that much different from two decades ago.

In fact, several experts concurred that doing neat things with statistics and big data (which account for many of the recent AI “breakthroughs”) are no substitute for understanding how the human brain actually works.

“Current models of intelligence are still extremely far away from anything resembling human intelligence,” philosopher and scientist Douglas Hofstadter told Geektime.

But why is everyone so excited about computer systems like IBM’s Watson, which beat the best human players on Jeopardy! and has even more recently been diagnosing disease?

“Watson doesn’t understand anything at all,” said Hofstadter.  “It is just good at grammatical parsing and then searching for text strings in a very large database.”

Similarly, Google Translate understands nothing whatsoever of the sentences that it converts from one language to another, “which is why it often makes horrendous messes of them,” said Hofstadter.”(1)

“In narrow domains like chess, computers are getting exponentially better.

But in some other domains, like strong artificial intelligence, general artificial intelligence, there’s been almost no progress.

Not many people are fooled into thinking that Siri is an example of general artificial intelligence.

We were promised Rosie the robot and got Roomba, which wanders the room and tries not to bump into anything.

AI actually nearly died in 1973. In Britain, the Lighthill Report was compiled by James Lighthill for the British Science Research Council as an evaluation of the academic research in the field of artificial intelligence.

The report said that artificial intelligence only worked in narrow domains, was unlikely to scale up, and would have limited applications. It led to the end of funding for British AI research, in what was called the first AI winter.

Current systems are still narrow. You have chess computers that can’t do anything else, driverless cars that can’t do anything else. There are language translators that are really good, but not perfect, often having problems with syntax, and they can’t actually answer questions about what they’re translating.

What you end up having in AI is a community of idiot savants, with special service programs that do one thing, but aren’t general.

Watson is probably the most impressive in some ways, but as with most artificial intelligence systems that actually work, there’s a hidden restriction that makes the task much easier than it appears. When you look at Watson, you think it knows everything and can look it up really quickly, but it turns out that 95% of all the Jeopardy! questions it’s trying to answer are the titles of Wikipedia pages. It’s basically searching Wikipedia pages; it seems like a general intelligence, but it isn’t one.
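Hofstadter’s earlier description, ‘searching for text strings in a very large database’, can be caricatured in a few lines of Python. The titles and clue below are invented; the matcher simply counts shared words and has no notion of meaning:

```python
# Toy stand-in for answering by string matching over a title database.
titles = [
    "Saturn (planet)",
    "Paris",
    "Eiffel Tower",
    "Ring system of Saturn",
]

def answer(clue):
    # Return the title sharing the most words with the clue.
    # There is no model of what the words mean, only overlap counting.
    clue_words = set(clue.lower().split())
    return max(titles, key=lambda t: len(clue_words & set(t.lower().split())))

print(answer("the rings of this planet are mostly water ice"))
```

The clue is matched to “Ring system of Saturn” purely because they share the word “of”; when the database is as large and well-curated as Wikipedia, this kind of shallow lookup can look deceptively intelligent.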

IBM is still struggling to figure out what to do with it.

Your average teenager can pick up a new video game after an hour or two of practice or learn plenty of other skills.

The closest we have to that in AI is the company DeepMind, which Google bought in 2014. It has a system that can do general-purpose learning of a limited sort, and it’s actually better than humans at a few video games.

We’re still a long way from machines that can master a wide range of tasks, understand something like Wikipedia, and learn for themselves.

We were promised human level intelligence and what we got were things like ‘key word’ searches. Anyone who’s done searches on Google has run into the limitations of this level of processing.

The trouble with Big Data is that it’s all correlation and no causation:

You can always go and find correlations, but just finding correlations, which is what Big Data does, if it’s done in an unsophisticated way, doesn’t necessarily give you the right answer.

It’s important to realize that children don’t just care about correlations. They want to know why things are correlated:

Children are asking questions. Big Data is just collecting data.

AI’s roots were in trying to understand human intelligence. Hardly anybody talks about human intelligence anymore. They just talk about getting a big database and running a particular function on it.”(2)
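The correlation-without-causation point is easy to demonstrate with invented numbers. The two series below are fabricated so that both track a hidden third factor (think summer heat driving both ice-cream sales and swimming accidents); the Pearson coefficient comes out close to 1 even though neither causes the other:

```python
def pearson(xs, ys):
    # Plain Pearson correlation coefficient, no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented monthly figures; both simply rise and fall with the season.
ice_cream_sales = [20, 30, 55, 80, 85, 60, 35]
drownings = [2, 3, 6, 9, 10, 7, 4]

r = pearson(ice_cream_sales, drownings)
print(round(r, 3))  # strong positive correlation, zero causation
```

The computation faithfully reports that the series move together; what it cannot report is why, which is exactly the question a child would ask.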

“In Marcus’ view, the only route to true machine intelligence is to begin with a better understanding of human intelligence. Big data will only get you so far, because it’s all correlation and no causation.

When children begin learning about the world, they don’t need big data. That’s because their brains are understanding why one thing causes another. That process only requires “small data,” says Marcus. “They don’t need exabytes of data to learn a lot.”

“My 22-month old is already more sophisticated than the best robots in the world in digging through bad toys and finding something new.”

Marcus offers several examples of aspects of human intelligence that we need to gain a better understanding of if we want to build intelligent machines. For instance, a human being who looks at the following picture will be able to guess what happens next:

No machine can.”(1)

Below is an image of a goose on a lake, and there’s a detail there that looks like a car:

“If you take a Deep Learning algorithm and have it look at a picture like this, it might produce what’s called a false alarm. It might say it sees a duck and that it sees a car there too. You, as a human being, know that there’s not a car in the lake. So, if you have common sense, you use that as part of your analysis of the image and you don’t usually get fooled.

Try doing a search on the following: ‘Which is closer, Paris or Saturn?’, and see what you get for search results. Any child should be able to answer that question, but from most search engines you will just get various links to info about Saturn and some links to info about Paris.

Natural Language

There’s a kind of sentence called a generic, such as ‘triangles have three sides’. What is meant by generic here is that triangles have three sides in general.

But it can be looser than that. For example, one can say ‘dogs have four legs’. Most dogs have four legs, but not all of them do, since most people have seen three-legged dogs.

The point here is that you can make sense of that statement. You can read an encyclopedia and make inferences about it.

How about ‘ducks lay eggs’? Well, this isn’t even true of most ducks since half the ducks are male and they don’t lay eggs, and some of the ducks are too young or too old or have a disorder, and they don’t lay eggs. So maybe only 30% of ducks actually lay eggs, but you understand it, you get it, you can think about it. We can make inferences, even though we don’t have a statistically reliable truth.

Children are able to understand this but machines aren’t.

The field of AI is hyped up to be further along than it actually is. There’s been little progress on making genuinely smart machines. Statistics and Big Data, as popular as they are, are never going to get us all the way there by themselves.

The only route to true machine intelligence is going to begin with a better understanding of human intelligence.”(2)

The Ideology of AI

“If computers don’t actually even think in the human sense, then why do the media and high-tech business leaders seem so eager to jump the gun? Why would they have us believe that robots are about to surpass us?

Perhaps many of us actually want computers to be smarter than humans because it’s an appealing fantasy. If robots are at parity with humans, then we can define down what it means to be human — we’re just an obsolete computer program — and all the vexing, painful questions (why do we suffer, why do we die, how should we live?) become irrelevant.

It also justifies a world in which we put algorithms on a pedestal and believe they will solve all our problems. Jaron Lanier compares it to a religion:

“In the history of organized religion,” he told Edge.org, “it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.”

“That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else…contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, ‘Well, but they’re helping the AI, it’s not us, they’re helping the AI.’ The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”(1)

The Mythic Singularity

“Why is religious language so pervasive in AI and transhumanist circles?

The odd thing about the anti-clericalism in the AI community is that religious language runs wild in its ranks, and in how the media reports on it. There are AI ‘oracles’ and technology ‘evangelists’ of a future that’s yet to come, plus plenty of loose talk about angels, gods and the apocalypse.

Ray Kurzweil, an executive at Google, is regularly anointed a ‘prophet’ by the media – sometimes as a prophet of a coming wave of ‘superintelligence’ (a sapience surpassing any human’s capability); sometimes as a ‘prophet of doom’ (thanks to his pronouncements about the dire prospects for humanity); and often as a soothsayer of the ‘singularity’ (when humans will merge with machines, and as a consequence live forever).

The tech folk who also invoke these metaphors and tropes operate in overtly and almost exclusively secular spaces, where rationality is routinely pitched against religion. But believers in a ‘transhuman’ future – in which AI will allow us to transcend the human condition once and for all – draw constantly on prophetic and end-of-days narratives to understand what they’re striving for.

From its inception, the technological singularity has represented a mix of otherworldly hopes and fears. The modern concept has its origin in 1965, when Gordon Moore, later the co-founder of Intel, observed that the number of transistors you could fit on a microchip was doubling roughly every 12 months. This became known as Moore’s Law: the prediction that computing power would grow exponentially until at least the early 2020s, when transistors would become so small that quantum interference was likely to become an issue.
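The doubling rule quoted above compounds quickly, which a few lines make explicit. The starting figure of 2,300 transistors (roughly an early-1970s microprocessor) is used purely for illustration:

```python
# Exponential growth under a "doubling every 12 months" rule:
# count after n years = starting count * 2**n.
start = 2_300  # transistors on an early-1970s chip, for illustration

for years in (10, 20, 30):
    count = start * 2 ** years  # one doubling per year
    print(f"after {years} years: {count:,} transistors")
```

Ten doublings multiply the count by about a thousand, thirty by about a billion, which is why singularitarian extrapolations from this curve reach such extreme conclusions so quickly.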

‘Singularitarians’ have picked up this thinking and run with it. In Speculations Concerning the First Ultraintelligent Machine (1965), the British mathematician and cryptologist I J Good offered this influential description of humanity’s technological inflection point:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

These meditations are shot through with excitement but also the very old anxiety about humans’ impending obsolescence. Kurzweil has said that Moore’s Law expresses a universal ‘Law of Accelerating Returns’ as nature moves towards greater and greater order. He predicts that computers will first reach the level of human intelligence, before rapidly surpassing it in a recursive, self-improving spiral.

When the singularity is conceived as an entity or being, the questions circle around what it would mean to communicate with a non-human creature that is omniscient, omnipotent, possibly even omnibenevolent. This is a problem that religious believers have struggled with for centuries, as they quested towards the mind of God.

In the 13th century, Thomas Aquinas argued for the importance of a passionate search for a relationship and shaped it into a Christian prayer: ‘Grant me, O Lord my God, a mind to know you, a heart to seek you, wisdom to find you …’ Now, in online forums, rationalist ‘singularitarians’ debate what such a being would want and how it would go about getting it, sometimes driving themselves into a state of existential distress at the answers they find.

A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.

The stories and forms that religion takes are still driving the aspirations we have for AI. What lies behind this strange confluence of narratives? The likeliest explanation is that when we try to describe the ineffable – the singularity, the future itself – even the most secular among us are forced to reach for a familiar metaphysical lexicon. When trying to think about interacting with another intelligence, when summoning that intelligence, and when trying to imagine the future that such an intelligence might foreshadow, we fall back on old cultural habits. The prospect of creating an AI invites us to ask about the purpose and meaning of being human: what a human is for in a world where we are not the only workers, not the only thinkers, not the only conscious agents shaping our destiny.”(4)

Superior Intelligence and Rogue AI

“In Aristotle’s book, The Politics, he explains: ‘[T]hat some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule.’ What marks the ruler is their possession of ‘the rational element’. Educated men have this the most, and should therefore naturally rule over women – and also those men ‘whose business is to use their body’ and who therefore ‘are by nature slaves’. Lower down the ladder still are non-human animals, who are so witless as to be ‘better off when they are ruled by man’.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilized peoples and non-human animals.

Needless to say, more than 2,000 years later, the train of thought that these men set in motion has yet to be derailed.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

According to Kant, the reasoning being – today, we’d say the intelligent being – has infinite worth or dignity, whereas the unreasoning or unintelligent one has none. His arguments are more sophisticated, but essentially he arrives at the same conclusion as Aristotle: there are natural masters and natural slaves, and intelligence is what distinguishes them.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory. In addition, because intelligence defined humanity, by virtue of being less intelligent, these peoples were less human. They therefore did not enjoy full moral standing – and so it was perfectly fine to kill or enslave them.

So when we reflect upon how the idea of intelligence has been used to justify privilege and domination throughout more than 2,000 years of history, is it any wonder that the imminent prospect of super-smart robots fills us with dread?

From 2001: A Space Odyssey to the Terminator films, writers have fantasized about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap. If we’ve absorbed the idea that the more intelligent can colonize the less intelligent as of right, then it’s natural that we’d fear enslavement by our super-smart creations. If we justify our own positions of power and prosperity by virtue of our intellect, it’s understandable that we see superior AI as an existential threat.

This narrative of privilege might explain why, as the New York-based scholar and technologist Kate Crawford has noted, the fear of rogue AI seems predominant among Western white men. Other groups have endured a long history of domination by self-appointed superiors, and are still fighting against real oppressors. White men, on the other hand, are used to being at the top of the pecking order. They have most to lose if new entities arrive that excel in exactly those areas that have been used to justify male superiority.

I don’t mean to suggest that all our anxiety about rogue AI is unfounded. There are real risks associated with the use of advanced AI (as well as immense potential benefits). But being oppressed by robots in the way that, say, Australia’s indigenous people have been oppressed by European colonists is not number one on the list.

We would do better to worry about what humans might do with AI, rather than what it might do by itself. We humans are far more likely to deploy intelligent systems against each other, or to become over-reliant on them. As in the fable of the sorcerer’s apprentice, if AIs do cause harm, it’s more likely to be because we give them well-meaning but ill-thought-through goals – not because they wish to conquer us. Natural stupidity, rather than artificial intelligence, remains the greatest risk.”(3)

Consumers Don’t Want It

“2016 and 2017 saw “AI” being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn’t been a great success.

The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many including Microsoft and Facebook hope. Oren Etzioni at Paul Allen’s Institute predicts it will become a “trillion-dollar industry”. But he also admits, “my 4 YO is far smarter than any AI program I ever met”.

Hmmm, thanks Oren. So what you’re saying is that we must now get used to chatting with something dumber than a four-year-old, just because software can be made to act like one.

Put it this way. How many times have you rung a call center recently and wished that you’d spoken to someone even thicker, or hemmed in by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: “That was terrible, but at least MegaCorp will make higher margins this year! They’re at the cutting edge of AI!”?

In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.

The other area where apparently impressive feats of “AI” were unleashed upon the public was subtler. Unbidden, unwanted AI “help” is starting to pop out at us. Google scans your personal photos and later, if you have an Android phone, will pop up “helpful” reminders of where you have been. People almost universally find this creepy. We could call this a “Clippy The Paperclip” problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realizes.

The successful web services today so far are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven’t thought more closely about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?

AI Is a Make Believe World Populated By Mad People

The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype.

The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, to indulge in wild fantasies about humans being replaced by robots. The latter reflects a displacement activity: the professions are already surrendering autonomy in their work to technocratic managerialism. They’ve made robots out of themselves – and now fear being replaced by robots.

There’s a cultural gulf between AI’s promoters and the public that Asperger’s alone can’t explain. There’s no polite way to express this, but AI belongs to California’s inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realizes.


Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to “play God and be amazed by magic”, the two big things they miss from childhood. Look at Zuckerberg’s house, for example. What these people want is not what you or I want. I’d be wary of them running an after school club.”(6)

Should We Be Afraid of AI?

“Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Once ultraintelligent machines become a reality, they might not be docile at all but behave like Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good’s ‘intelligence explosion’ is a serious risk, and the end of our species might be near, if we’re not careful. This is Stephen Hawking in 2014:

The development of full artificial intelligence could spell the end of the human race.

Last year, Bill Gates was of the same view:

I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us and not be superintelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this, and don’t understand why some people are not concerned.

And what had Musk, Tesla’s CEO, said?

We should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.

The reality is more trivial. This March, Microsoft introduced Tay – an AI-based chat robot – to Twitter. They had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-promoting, ‘Bush did 9/11’-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

The current debate about AI is a dichotomy between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen. Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999) or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, disbelievers will be referred to as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of ‘we must do something now or it will be too late’, an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble. Correct. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn’t it? Yes it could. But this ‘could’ is mere logical possibility – as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling unwell, and ‘I could be a butterfly that dreams it’s a human being.’

There is no contradiction in assuming that a dead relative you’ve never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren’t possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the ‘could’ is not the ‘could happen’ of an earthquake, but the ‘it isn’t true that it couldn’t happen’ of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favorite reference is Moore’s Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some time, does not mean that it will continue to do so forever.
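The doubling claim compounds faster than intuition suggests, which is why it makes such attractive rhetoric. A back-of-the-envelope sketch (illustrative starting figures of my own choosing, not from the quoted essay):

```python
# Moore's Law as arithmetic: a doubling every two years.
# Illustrative baseline: ~2,300 transistors (the Intel 4004, 1971).
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count if the doubling trend held unbroken."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(transistors(1991)))  # 2355200 -- ~2.4 million after 10 doublings
print(round(transistors(2021)))  # 77175193600 -- ~77 billion after 25 doublings
```

The point of the toy calculation is the essay's own: exponentials look unstoppable on paper, but nothing physical sustains them forever.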

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about real evils oppressing humanity and our planet.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, disbelievers – AItheists – make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

AItheists’ faith is as misplaced as the Singularitarians’. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world’s most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow, so surely Google must know something about the real chances of developing a computer that can think, that we, outside ‘The Circle’, are missing? Eric Schmidt, Google’s executive chairman, fuelled this view, when he told the Aspen Institute in 2013: ‘Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years.’

The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s.
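The “tricks developed in the 1960s” are essentially keyword-spotting and canned reflection, in the style of Weizenbaum’s ELIZA. A minimal sketch of the idea (hypothetical rules of my own, not the original program):

```python
# ELIZA-style chatbot: keyword matching plus canned reflections.
# There is no understanding anywhere -- just pattern lookup, 1960s-style.
import re

RULES = [
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reply(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(reply("I am worried about AI"))  # How long have you been worried about AI?
print(reply("Nice weather today"))     # Please go on.
```

A few such rules can sustain a surprisingly long conversation, which is exactly why the Turing test is such a low bar and why passing it would still prove so little.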

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question ‘Can a machine think?’ is ‘too meaningless to deserve discussion’. This holds true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic, on the one hand, and models of computation, on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go, and the quiz show Jeopardy, better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
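The abstract model is startlingly small. As an illustration (my own toy example, not from the quoted essay), here is a complete Turing machine that increments a binary number: a transition table, a tape, and a head, nothing more:

```python
# A tiny Turing machine that adds 1 to a binary number on its tape.
# delta: (state, symbol) -> (symbol to write, head move, next state)
DELTA = {
    ("inc", "1"): ("0", -1, "inc"),   # carry: turn 1 into 0, keep moving left
    ("inc", "0"): ("1",  0, "done"),  # absorb the carry and halt
    ("inc", "_"): ("1",  0, "done"),  # ran off the left edge: write a new digit
}

def increment(bits):
    tape = dict(enumerate(bits))      # sparse tape; "_" is the blank symbol
    head, state = len(bits) - 1, "inc"  # start at the least-significant bit
    while state != "done":
        write, move, state = DELTA[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1))

print(increment("1011"))  # 1100
print(increment("111"))   # 1000
```

Everything a digital computer does, checkers engines and Jeopardy champions included, is in principle a (vastly larger) table of this kind, subject to the same mathematical limits.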

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviors. So we are not the only agents able to perform tasks successfully.

These are ordinary artifacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

What’s the difference? The same as between you and the dishwasher when washing the dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analyzing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won the boardgame Go against the world’s best player because it could use a database of around 30 million moves and play thousands of games against itself, ‘learning’ how to improve its performance. It is like a two-knife system that can sharpen itself.
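That self-sharpening loop can be caricatured in miniature (my own illustrative example; AlphaGo itself used deep networks and Monte Carlo tree search): a solver for the toy game of Nim that feeds each earlier evaluation back in as the input for the next one, with no understanding anywhere in the loop:

```python
# Self-play in miniature: solve take-1-2-or-3 Nim by reusing earlier
# evaluations as input for the next (the player who cannot move loses).
def solve(n_max, moves=(1, 2, 3)):
    win = [False]  # win[0]: no stones left, the player to move has lost
    for n in range(1, n_max + 1):
        # A position is winning if some move leaves the opponent in a
        # losing position -- computed entirely from the program's own
        # previous outputs, the "two-knife system" sharpening itself.
        win.append(any(not win[n - k] for k in moves if k <= n))
    return win

table = solve(12)
print([n for n in range(13) if not table[n]])  # losing positions: [0, 4, 8, 12]
```

The loop plays perfectly once the table is built, yet at no point does it know what a game is; the "learning" is bookkeeping.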

We are and shall remain, for any foreseeable future, the problem, not our technology.

So we should concentrate on the real challenges:

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviors, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that ‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its smart technologies as well.

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante’s Inferno: ‘Speak not of them, but look, and pass them by.’ For the world needs some good philosophy, and we need to take care of more pressing problems.”(7)

Cites:

(1) You’ve read the hype, now read the truth

(2) Web Summit 2014 Day One – Gary Marcus

(3) Intelligence: a history

(4) fAIth

(5) Myth Busting Artificial Intelligence

(6) ‘Artificial Intelligence’ was 2016’s fake news

(7) Should we be afraid of AI?