Meet David Hogg – Crisis Actor & Friends

Parkland School Shooting Personality David Hogg Can’t Remember His Lines:

And there’s more:

So ……..Why would these folks be used in this supposed real event?

“This particular event stands out due to all the laughing and smiling by the main students being promoted as angry anti-gun advocates by the media.

It is notable that the mainstream media has quickly embraced a few specific personalities in regards to the Parkland school shooting event:

“Keep in mind that this is the same media that openly promotes wars overseas and even support for cloaked terrorist groups such as the White Helmets in Syria. Thus, they should always be watched with a skeptical eye as they often misreport the truth and rely on triggering an emotional response from their audience.

Information posted online by the students at Parkland and by other individuals reveals some of these children to be involved with the crafts of acting, reporting and directing movies:”(1)

 

 

 

 

Link to

 

 

 

 

 

Facebook page of Jeff Kasky, father of student Cameron Kasky who has been featured in numerous media appearances since the event, shows that his father posted the image below on December 10, 2017. Link:  https://www.facebook.com/jeffkasky

Link to additional Troupe 4879- Mobilizing MSD (Marjory Stoneman Douglas High School) account (established Saturday, February 17, 2018):

https://www.facebook.com/groups/1672529512839350/

Account is administrated by Jessica Goodin:

Link to Jessica Goodin Facebook account:

https://www.facebook.com/jessica.goodin.52?fref=gm&dti=1672529512839350&hc_location=group

Screen capture of front page of account shows that Jessica Goodin is a special effects Make up artist at Universal Studios Florida and Faces By Jessica, as well as studying special effects make up at the Cosmix School of Makeup Artistry:

Keep in mind that Jessica, who created the special effects above according to her own Facebook posts is the administrator for Troupe 4879 that Cameron Kasky works for as an award-winning director according to his own father’s Facebook post on December 10, 2017, as shown above.

The following image from Stoneman Douglas Drama Club confirms the organizations are one and the same:

The next student actor now doing media promotion for the Parkland shooting event is Alex Wind. He has appeared across numerous mainstream media stations in the wake of the event:

Two more examples (along with the one up above) of him on MSM outlets:

 

This is the link to his Facebook page:

https://www.facebook.com/alex.wind.1447/friends?lst=1509910576%3A100002387891632%3A1519270919&source_ref=pb_friends_tl

Images from his Facebook page reveal him to also be a student actor and a close friend of Kasky:

Onto the next student chosen by the media to discuss the Parkland shooting and antigun issues – Delaney Tarr:

Link to Facebook account of Delaney Tarr (image above), the 4th student personality being showcased by the mainstream media in the wake of the Parkland school shooting event reveals that she also has previous experience acting in front of the camera:

https://www.facebook.com/photo.php?fbid=1538279549832343&set=pb.100009509379163.-2207520000.1519430434.&type=3&theater

Other student witness statements that include anomalies related to the February 14, 2018 school shooting. Also included is a clip of CNN’s crying FBI anti-terrorism agent.

Multiple Shooters Participated in Florida School Shooting According to Eyewitness Alexa Miednik:

Second student witness says up to three shooters in Parkland Florida school mass shooting:

CIA Terrorism Expert ‘Cries’ on CNN During Wolf Blitzer Interview About Florida School Shooting:

A poor attempt by this actor:

“Many students and faculty who were at Majority Stoneman Douglas High School in Parkland, Florida, during the Valentine’s Day shooting thought it was just a drill after they were told in advance that role-players would be conducting a fake ‘code red’ which is an active shooter scenario.”(2)

‘Lift The Veil’ channel highlights the media now covering the active shooter drill that was taking place at the school. This looks a lot like damage control:

“Given they already had the make-up and fake wound materials at the school, how hard would it be to FAKE a “real” shooting for gun-control propaganda purposes?

But then folks ask, what about the ACTUAL dead kids?  Well, in the ABC NEWS video below, we find that TV Networks are already placing ADULTS AS FAKE STUDENTS in schools with FAKE IDENTITIES to find out what’s really going on inside schools, and to “address issues” like Bullying, sexuality and other things.   

IF THEY HAVE ALREADY HAVE FAKE STUDENTS WITH FAKE IDENTITIES , FAKING THEIR DEATHS IS EASY!!

WHO BENEFITS?

The 30 second video below shows former Attorney General Eric Holder openly telling schools to “brainwash” people about guns:

“Then there’s the “kids” from Parkland High School in Florida being on TV everywhere with their allegedly “grass roots effort” at Gun Control.  Turns out, though, their “grass roots effort” is actually astroturf!

“Can you believe these kids?” It’s been a recurring theme of the coverage of the Parkland school shooting: the remarkable effectiveness of the high school students who created a gun control organization in the wake of the massacre. In seemingly no time, the magical kids had organized events ranging from a national march to a mass school walkout, and they’d brought in a million dollars in donations from Oprah Winfrey and George Clooney.

The Miami Herald credited their success to the school’s stellar debate program. The Wall Street Journal said it was because they were born online, and organizing was instinctive. It wasn’t. 

On February 28, BuzzFeed came out with the actual story: Rep. Debbie Wassermann Schultz aiding in the lobbying in Tallahassee, a teacher’s union organizing the buses that got the kids there, Michael Bloomberg’s groups and the Women’s March working on the upcoming March For Our Lives, MoveOn.org doing social media promotion and (potentially) march logistics, and training for student activists provided by federally funded Planned Parenthood.

The president of the American Federation of Teachers told BuzzFeed they’re also behind the national school walkout, which journalists had previously assured the public was the sole work of a teenager. (I’d thought teachers were supposed to get kids into school, but maybe that’s just me.)

In other words, the response was professionalized – propagandized. That’s not surprising, because this is what organization that gets results actually looks like. It’s not a bunch of magical kids in somebody’s living room. Nor is it surprising that the professionalization happened right off the bat. Broward County’s teacher’s union is militant, and Rep. Ted Lieu stated on Twitter that his family knows Parkland student activist David Hogg’s family, so there were plenty of opportunities for grown-ups with resources and skills to connect the kids.”(2)

“That’s before you get to whether any of them had been involved in the Women’s March. According to BuzzFeed, Wassermann Schultz was running on day two.

What’s striking about all this isn’t the organization. If you start reading books about organizing, it’s clear how it all works. But no journalist covering the story wrote about this stuff for two weeks. Instead, every story was about the Parkland kids being magically effective.

On Twitter, the number of bluechecks rhapsodizing over how effective the kids’ organizational instincts were. But organizing isn’t instinctive. It’s skilled work; you have to learn how to do it, and it takes really a lot of people. You don’t just get a few magical kids who’re amazing and naturally good at it.

The real tip-off should have been the $500,000 donations from Oprah Winfrey and George Clooney. Big celebrities don’t give huge money to strangers on a whim. Somebody who knows Winfrey and Clooney called them and asked. But the press’s response was to be ever more impressed with the kids. 

For two weeks, journalists abjectly failed in their jobs, which is to tell the public what’s going on. And any of them who had any familiarity with organizing campaigns absolutely knew. Matt Pearce, of the Los Angeles Times, would have been ideally placed to write an excellent article: not only is he an organizer for the Times’s union, he moderated a panel on leftist activism for the LA Times Book Festival and has the appropriate connections in organizing. Instead, he wrote about a school walkout, not what was behind it. (In another article, Pearce defined Delta caving to a pressure campaign’s demands as “finding middle ground.”)

But it’s not just a mainstream media problem. None of the righty outlets writing about Parkland picked up on the clear evidence that professional organizers were backing the Parkland kids, either. Instead, they objected to the front-and-centering of minor kids as unseemly, which does no good: Lefties aren’t going to listen, and it doesn’t educate the Right to counter.

The closest anyone got was Elizabeth Harrington at the Washington Free Beacon, who noted that Clooney’s publicist was booking the kids’ media interviews pro bono, and said that a friend (not Clooney) had asked him to do it. The result of all this is that the average righty does not understand what’s going on in activism, because all they see is what the press covers. The stuff that’s visible. It’s like expecting people in the Stone Age to grok the Roman army by looking at it. 

Which brings us to Who Benefits from this “shooting?”

Notice how guns have stolen the attention from the absolute provable crimes of Clinton and Obama rigging and and trying to steal the election. 

They’re so many crimes that they both committed that literally you could just pick any crime. 

Richard Nixon only dreamed of doing what Obama actually did: Weaponizing the IRS against political opponents. Weaponizing the intelligence against political opponents. 

These criminals are on the run and using every tactic they can to keep their crimes out of the mainstream news which is controlled by [………..you guessed it…..the oligarchs and their cultural Marxists minions.]

So was the “mass-shooting” in Florida real . . . or just one big left-wing propaganda push against guns to divert attention from Clinton and Obama crimes?”(2)

“So there you have it, they just happened to be running a bunch of active shooter drills that day and wearing lots of gory makeup and a crazy kid who looks comatose just happened to show up at the school and shoot a bunch of people for real and then they just happened to fortunately be a bunch of student actors that had well rehearsed lines about how angry they are about guns, except when they were caught smiling for the photo shoots, of course. And then they just luckily became media darlings, you know, that same US media that hate guns for American citizens, but never met a war it didn’t like :), or a child to exploit and promote their nefarious agenda.

So now these acting students turned political activists for “March for Our Lives” are off on the talk show and nationwide rally circuit backed by Hollywood’s millionaires such as George Clooney (White Helmets promoter – the White Helmets are terrorists posing as rescuers by the way), Steven Spielberg, Oprah Winfrey and others Hollywood elitists…..all to help rid America of its guns and give them up to the government……..a government led by a man that the so called ‘left’ refers to as a modern day Hitler. Hmmmm…..somethings not quite adding up here. Oh well, maybe their next performance will be more realistic.”(1)

Cites:

(1) Extensive Post Reveals Drills, Anomalies and Child Actors Involved With Parkland School Shooting in Florida on February 14, 2018 (N.S.F.W.)

(2) VIDEO: FAKE BLOOD, FAKE WOUNDS for “Active-Shooter DRILL” at same High School in Florida, hours BEFORE “actual” shootings which “killed 17”

Artificial Intelligence is Hype

Praise for AI is creeping into more and more Ted talks, YouTube videos and news feeds as each week passes.

The question is, or should be, how much of the hype surrounding artificial intelligence is warranted?

“For 60 years scientists have been announcing that the great AI breakthrough is just around the corner.  All of a sudden many tech journalists and tech business leaders appear convinced that, finally, AI has come into its own.”(1)

“We’ve all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There’s also a lot of confusion about what they really mean and what’s actually possible today. These terms are used arbitrarily and sometimes interchangeably, which further perpetuates confusion.

So, let’s break down these terms and offer some perspective.

Artificial Intelligence

Artificial Intelligence is a branch of computer science that deals with algorithms inspired by various facets of natural intelligence. It includes performing tasks that normally require human intelligence, such as visual perception, speech recognition, problem solving and language translation. Artificial intelligence can be seen in many everyday products, from intelligent personal assistants in your smartphone to the Xbox 360 Kinect camera, which lets you interact with games through body movement. There are also well-known examples of AI that are more experimental, from the self-aware Super Mario to the widely discussed driverless car. Other less commonly discussed examples include the ability to sift through millions of images to pull together notable insights.

Big Data

Big Data is an important part of AI and is defined as data sets so large that they cannot be analyzed, searched or interpreted using traditional data processing methods. As a result, they have to be analyzed computationally to reveal patterns, trends, and associations. This computational analysis, for instance, has helped businesses improve customer experience and their bottom line by better understanding human behavior and interactions. Many retailers now rely heavily on Big Data to adjust pricing in near-real time for millions of items, based on demand and inventory. However, processing Big Data to make predictions or decisions like this often requires the use of Machine Learning techniques.

Machine Learning

Machine Learning is a form of artificial intelligence that involves algorithms that can learn from data. Such algorithms operate by building a model from inputs and using that model to make predictions or decisions, rather than following only explicitly programmed instructions. Lots of basic decisions can be made this way; Nest’s learning thermostats are one example. Machine Learning is widely used in spam detection, credit card fraud detection, and product recommendation systems, such as those of Netflix and Amazon.
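The contrast with "explicitly programmed instructions" can be made concrete in a few lines. This toy sketch (an illustration of my own, not from the quoted source) never encodes the rule y = 2x + 1; it recovers that rule from examples alone by gradient descent:

```python
# Toy "learning from data": instead of hard-coding the rule y = 2x + 1,
# we recover it from examples with stochastic gradient descent.
data = [(x, 2 * x + 1) for x in range(10)]   # training examples

w, b = 0.0, 0.0   # model parameters: the program starts knowing nothing
lr = 0.01         # learning rate
for _ in range(5000):
    for x, y in data:
        err = (w * x + b) - y   # how wrong the current model is
        w -= lr * err * x       # nudge parameters to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))   # parameters converge to about 2 and 1
```

Scaled up to millions of parameters and far messier data, this is the same loop inside modern machine-learning systems: measure the error, nudge the model, repeat.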

Deep Learning

Deep Learning is a class of machine learning techniques that operate by constructing numerous layers of abstraction to help map inputs to classifications more accurately. The abstractions made by Deep Learning methods are often observed to be human-like, and the big breakthrough in this field has been the scale of abstraction that can now be achieved. This has resulted, in recent years, in breakthroughs in computer vision and speech recognition accuracy. Deep Learning is inspired by a simplified model of the way neural networks are thought to operate in the brain.
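A minimal sketch of those "layers of abstraction", assuming nothing beyond NumPy: a two-layer network learns XOR, a mapping no single-layer model can represent, by first building an intermediate layer of learned features:

```python
import numpy as np

# A tiny two-layer network. Each layer is one "layer of abstraction":
# the hidden layer learns intermediate features, the output layer maps
# those features to a classification. XOR is the classic function a
# single-layer model cannot represent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)       # layer 1: learned features
    out = sigmoid(h @ W2 + b2)     # layer 2: classification
    # backpropagate the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel())   # should recover [0 1 1 0]
```

Stacking more such layers, with more data and better training tricks, is essentially what "deep" learning means.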

No doubt AI is in a hype cycle these days. Recent breakthroughs in Distributed AI and Deep Learning, paired with the ever-increasing need for deriving value from huge stashes of data being collected in every industry, have helped renew interest in AI.”(5)

Human levels of understanding? Really?

How much of an AI breakthrough has humanity actually achieved, as opposed to wishful thinking?

“Gary Marcus, a psychology professor at New York University, who writes about artificial intelligence for the New Yorker, was the first to burst the balloon. He told Geektime that while the coalescence of parallel computation and big data has led to some exciting results, so-called ‘deeper algorithms’ aren’t really that much different from two decades ago.

In fact, several experts concurred that doing neat things with statistics and big data (which account for many of the recent AI “breakthroughs”) are no substitute for understanding how the human brain actually works.

“Current models of intelligence are still extremely far away from anything resembling human intelligence,” philosopher and scientist Douglas Hofstadter told Geektime.

But why is everyone so excited about computer systems like IBM’s Watson, which beat the best human players on Jeopardy! and has even more recently been diagnosing disease?

“Watson doesn’t understand anything at all,” said Hofstadter.  “It is just good at grammatical parsing and then searching for text strings in a very large database.”

Similarly, Google Translate understands nothing whatsoever of the sentences that it converts from one language to another, “which is why it often makes horrendous messes of them,” said Hofstadter.”(1)

“In narrow domains like chess, computers are getting exponentially better.

But in some other domains, like strong or general artificial intelligence, there’s been almost no progress.

Not many people are fooled into thinking that Siri is an example of general artificial intelligence.

We were promised Rosie the robot and got Roomba, which wanders the room and tries not to bump into anything.

AI actually nearly died in 1973. In Britain, the Lighthill Report, compiled by James Lighthill for the British Science Research Council, evaluated academic research in the field of artificial intelligence.

The report said that artificial intelligence only worked in narrow domains, was unlikely to scale up, and would have limited applications. It led to the effective end of funding for British AI research, a period that became known as the first AI winter.

Current systems are still narrow. You have chess computers that can’t do anything else, driverless cars that can’t do anything else. There are language translators that are really good at translating languages, but are not perfect, often having a problem with syntax, and they can’t actually answer questions about what they’re translating.

What you end up having in AI is a community of idiot savants, with special service programs that do one thing, but aren’t general.

Watson is probably the most impressive in some ways, but as with most artificial intelligence systems that actually work, there’s a hidden restriction that makes the task easier than it looks. When you look at Watson, you think it knows everything and can look anything up really quickly, but it turns out that 95% of all the Jeopardy questions it’s trying to answer are the titles of Wikipedia pages. It’s basically searching Wikipedia pages, so it seems like a general intelligence, but it isn’t one.

IBM is still struggling to figure out what to do with it.

Your average teenager can pick up a new video game after an hour or two of practice or learn plenty of other skills.

The closest we have to that in AI is the company DeepMind, which Google bought in 2014. It’s a system that can do general-purpose learning of a limited sort. It’s actually better than humans at a few video games.

We’re still a long way from machines that can master a wide range of tasks, understand something like Wikipedia, and learn for themselves.

We were promised human level intelligence and what we got were things like ‘key word’ searches. Anyone who’s done searches on Google has run into the limitations of this level of processing.

The trouble with Big Data is that it’s all correlation and no causation:

You can always go and find correlations, but finding correlations alone, which is what Big Data does when used in an unsophisticated way, doesn’t necessarily give you the right answer.

It’s important to realize that children don’t just care about correlations. They want to know why things are correlated:

Children are asking questions. Big Data is just collecting data.

AI’s roots were in trying to understand human intelligence. Hardly anybody talks about human intelligence anymore. They just talk about getting a big database and running a particular function on it.”(2)
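The correlation-without-causation point is easy to demonstrate. In this sketch (the numbers and series names are invented purely for illustration), two series that merely trend in the same direction come out almost perfectly correlated, despite having no causal link:

```python
# Two invented yearly series that both happen to trend upward.
years = range(10)
ice_cream_sales = [100 + 10 * t for t in years]
broadband_users = [5 + 3 * t + (t % 2) for t in years]

def pearson(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

r = pearson(ice_cream_sales, broadband_users)
print(round(r, 3))   # near-perfect correlation, zero causation
```

Any shared trend, seasonality, or common cause will produce correlations like this, which is exactly why mining Big Data for correlations alone cannot answer the child's question of why two things move together.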

“In Marcus’ view, the only route to true machine intelligence is to begin with a better understanding of human intelligence. Big data will only get you so far, because it’s all correlation and no causation.

When children begin learning about the world, they don’t need big data. That’s because their brains are understanding why one thing causes another. That process only requires “small data,” says Marcus. “They don’t need exabytes of data to learn a lot.”

“My 22-month old is already more sophisticated than the best robots in the world in digging through bad toys and finding something new.”

Marcus offers several examples of aspects of human intelligence that we need to understand better if we want to build intelligent machines. For instance, a human being who looks at the following picture will be able to guess what happens next:

No machine can.”(1)

Below is an image of a goose on a lake, with a detail that looks like a car:

“If you take a Deep Learning algorithm and have it look at a picture like this, it might produce what’s called a false alarm. It might say it sees a duck and that it sees a car there too. You, as a human being, know that there’s no car in the lake. So, if you have common sense, you use it as part of your analysis of the image and you don’t usually get fooled.

Try doing a search on the following: ‘Which is closer Paris or Saturn’, and see what you get for search results. Any child should be able to answer that question, but for most search engines, you will just get various links to info about Saturn and some links to info about Paris.

Natural Language

There’s a kind of sentence called a generic, such as ‘triangles have three sides’. What is meant by generic here is that triangles have three sides in general.

But it can be looser than that. For example, one can say ‘dogs have four legs’. Most dogs have four legs, but not all of them do, since most people have seen three-legged dogs.

The point here is that you can make sense of that statement. You can read an encyclopedia and make inferences about it.

How about ‘ducks lay eggs’? Well, this isn’t even true of most ducks since half the ducks are male and they don’t lay eggs, and some of the ducks are too young or too old or have a disorder, and they don’t lay eggs. So maybe only 30% of ducks actually lay eggs, but you understand it, you get it, you can think about it. We can make inferences, even though we don’t have a statistically reliable truth.

Children are able to understand this but machines aren’t.

The field of AI is hyped up to be further along than it actually is. There’s been little progress on making genuinely smart machines. Statistics and Big Data, as popular as they are, are never going to get us all the way there by themselves.

The only route to true machine intelligence is going to begin with a better understanding of human intelligence.”(2)

The Ideology of AI

“If computers don’t actually even think in the human sense, then why do the media and high-tech business leaders seem so eager to jump the gun? Why would they have us believe that robots are about to surpass us?

Perhaps many of us actually want computers to be smarter than humans because it’s an appealing fantasy. If robots are at parity with humans, then we can define down what it means to be human — we’re just an obsolete computer program — and all the vexing, painful questions like why do we suffer, why do we die, how should we live? become irrelevant.

It also justifies a world in which we put algorithms on a pedestal and believe they will solve all our problems. Jaron Lanier compares it to a religion:

“In the history of organized religion,” he told Edge.org, “it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.”

“That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else…contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”(1)

The Mythic Singularity

“Why is religious language so pervasive in AI and transhumanist circles?

The odd thing about the anti-clericalism in the AI community is that religious language runs wild in its ranks, and in how the media reports on it. There are AI ‘oracles’ and technology ‘evangelists’ of a future that’s yet to come, plus plenty of loose talk about angels, gods and the apocalypse.

Ray Kurzweil, an executive at Google, is regularly anointed a ‘prophet’ by the media – sometimes as a prophet of a coming wave of ‘superintelligence’ (a sapience surpassing any human’s capability); sometimes as a ‘prophet of doom’ (thanks to his pronouncements about the dire prospects for humanity); and often as a soothsayer of the ‘singularity’ (when humans will merge with machines, and as a consequence live forever).

The tech folk who also invoke these metaphors and tropes operate in overtly and almost exclusively secular spaces, where rationality is routinely pitched against religion. But believers in a ‘transhuman’ future – in which AI will allow us to transcend the human condition once and for all – draw constantly on prophetic and end-of-days narratives to understand what they’re striving for.

From its inception, the technological singularity has represented a mix of otherworldly hopes and fears. The modern concept has its origin in 1965, when Gordon Moore, later the co-founder of Intel, observed that the number of transistors you could fit on a microchip was doubling roughly every 12 months. This became known as Moore’s Law: the prediction that computing power would grow exponentially until at least the early 2020s, when transistors would become so small that quantum interference was likely to become an issue.
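The arithmetic behind that observation shows why it invites extrapolation. A quick sketch, using the roughly 2,300 transistors of the Intel 4004 (1971) as a starting point:

```python
# Annual doubling compounds fast. Starting from the ~2,300 transistors
# of the Intel 4004 (1971), Moore's observation implies a thousandfold
# increase every decade (2**10 == 1024).
base = 2300
for years in (10, 20, 30):
    print(years, "years:", base * 2 ** years)
```

Ten doublings already put the count in the millions and twenty in the billions, roughly where real chips landed decades later; this steep curve is what the ‘intelligence explosion’ argument extrapolates from.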

‘Singularitarians’ have picked up this thinking and run with it. In Speculations Concerning the First Ultraintelligent Machine (1965), the British mathematician and cryptologist I J Good offered this influential description of humanity’s technological inflection point:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

These meditations are shot through with excitement but also the very old anxiety about humans’ impending obsolescence. Kurzweil has said that Moore’s Law expresses a universal ‘Law of Accelerating Returns’ as nature moves towards greater and greater order. He predicts that computers will first reach the level of human intelligence, before rapidly surpassing it in a recursive, self-improving spiral.

When the singularity is conceived as an entity or being, the questions circle around what it would mean to communicate with a non-human creature that is omniscient, omnipotent, possibly even omnibenevolent. This is a problem that religious believers have struggled with for centuries, as they quested towards the mind of God.

In the 13th century, Thomas Aquinas argued for the importance of a passionate search for a relationship and shaped it into a Christian prayer: ‘Grant me, O Lord my God, a mind to know you, a heart to seek you, wisdom to find you …’ Now, in online forums, rationalist ‘singularitarians’ debate what such a being would want and how it would go about getting it, sometimes driving themselves into a state of existential distress at the answers they find.

A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.

The stories and forms that religion takes are still driving the aspirations we have for AI. What lies behind this strange confluence of narratives? The likeliest explanation is that when we try to describe the ineffable – the singularity, the future itself – even the most secular among us are forced to reach for a familiar metaphysical lexicon. When trying to think about interacting with another intelligence, when summoning that intelligence, and when trying to imagine the future that such an intelligence might foreshadow, we fall back on old cultural habits. The prospect of creating an AI invites us to ask about the purpose and meaning of being human: what a human is for in a world where we are not the only workers, not the only thinkers, not the only conscious agents shaping our destiny.”(4)

Superior Intelligence and Rogue AI

“In The Politics, Aristotle explains: ‘[T]hat some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule.’ What marks the ruler is their possession of ‘the rational element’. Educated men have this the most, and should therefore naturally rule over women – and also those men ‘whose business is to use their body’ and who therefore ‘are by nature slaves’. Lower down the ladder still are non-human animals, who are so witless as to be ‘better off when they are ruled by man’.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilized peoples and non-human animals.

Needless to say, more than 2,000 years later, the train of thought that these men set in motion has yet to be derailed.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

According to Kant, the reasoning being – today, we’d say the intelligent being – has infinite worth or dignity, whereas the unreasoning or unintelligent one has none. His arguments are more sophisticated, but essentially he arrives at the same conclusion as Aristotle: there are natural masters and natural slaves, and intelligence is what distinguishes them.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory. In addition, because intelligence defined humanity, by virtue of being less intelligent, these peoples were less human. They therefore did not enjoy full moral standing – and so it was perfectly fine to kill or enslave them.

So when we reflect upon how the idea of intelligence has been used to justify privilege and domination throughout more than 2,000 years of history, is it any wonder that the imminent prospect of super-smart robots fills us with dread?

From 2001: A Space Odyssey to the Terminator films, writers have fantasized about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap. If we’ve absorbed the idea that the more intelligent can colonize the less intelligent as of right, then it’s natural that we’d fear enslavement by our super-smart creations. If we justify our own positions of power and prosperity by virtue of our intellect, it’s understandable that we see superior AI as an existential threat.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

This narrative of privilege might explain why, as the New York-based scholar and technologist Kate Crawford has noted, the fear of rogue AI seems predominant among Western white men. Other groups have endured a long history of domination by self-appointed superiors, and are still fighting against real oppressors. White men, on the other hand, are used to being at the top of the pecking order. They have most to lose if new entities arrive that excel in exactly those areas that have been used to justify male superiority.

I don’t mean to suggest that all our anxiety about rogue AI is unfounded. There are real risks associated with the use of advanced AI (as well as immense potential benefits). But being oppressed by robots in the way that, say, Australia’s indigenous people have been oppressed by European colonists is not number one on the list.

We would do better to worry about what humans might do with AI, rather than what it might do by itself. We humans are far more likely to deploy intelligent systems against each other, or to become over-reliant on them. As in the fable of the sorcerer’s apprentice, if AIs do cause harm, it’s more likely to be because we give them well-meaning but ill-thought-through goals – not because they wish to conquer us. Natural stupidity, rather than artificial intelligence, remains the greatest risk.”(3)

Consumers Don’t Want It

“2016 and 2017 saw “AI” being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn’t been a great success.

The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many including Microsoft and Facebook hope. Oren Etzioni at Paul Allen’s Institute predicts it will become a “trillion-dollar industry.” But he also admits that “my 4 YO is far smarter than any AI program I ever met”.

Hmmm, thanks Oren. So what you’re saying is that we must now get used to chatting with someone dumber than a four-year-old, just because they can make software act dumber than a four-year-old.

Put it this way. How many times have you rung a call center recently and wished that you’d spoken to someone even thicker, or rendered by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: “That was terrible, but at least MegaCorp will make higher margins this year! They’re at the cutting edge of AI!”?

In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.

The other area where apparently impressive feats of “AI” were unleashed upon the public was subtler. Unbidden, unwanted AI “help” is starting to pop out at us. Google scans your personal photos and later, if you have an Android phone, will pop up “helpful” reminders of where you have been. People almost universally find this creepy. We could call this a “Clippy The Paperclip” problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realizes.

The successful web services today so far are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven’t thought more closely about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?

AI Is a Make-Believe World Populated by Mad People

The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype.

The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, to indulge in wild fantasies about humans being replaced by robots. The latter reflects a displacement activity: the professions are already surrendering autonomy in their work to technocratic managerialism. They’ve made robots out of themselves – and now fear being replaced by robots.

There’s a cultural gulf between AI’s promoters and the public that Asperger’s alone can’t explain. There’s no polite way to express this, but AI belongs to California’s inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realizes.

Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to “play God and be amazed by magic”, the two big things they miss from childhood. Look at Zuckerberg’s house, for example. What these people want is not what you or I want. I’d be wary of them running an after school club.”(6)

Should We Be Afraid of AI?

“Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Once ultraintelligent machines become a reality, they might not be docile at all but behave like Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good’s ‘intelligence explosion’ is a serious risk, and the end of our species might be near, if we’re not careful. This is Stephen Hawking in 2014:

The development of full artificial intelligence could spell the end of the human race.

Last year, Bill Gates was of the same view:

I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us and not be superintelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this, and don’t understand why some people are not concerned.

And what had Musk, Tesla’s CEO, said?

We should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.

The reality is more trivial. This March, Microsoft introduced Tay – an AI-based chat robot – to Twitter. They had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-promoting, ‘Bush did 9/11’-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

The current debate about AI is a dichotomy between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen. Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999) or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, disbelievers will be referred to as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of ‘we must do something now or it will be too late’, an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble. Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn’t it? Yes it could. But this ‘could’ is mere logical possibility – as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling unwell, and ‘I could be a butterfly that dreams it’s a human being.’

There is no contradiction in assuming that a dead relative you’ve never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren’t possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the ‘could’ is not the ‘could happen’ of an earthquake, but the ‘it isn’t true that it couldn’t happen’ of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favorite reference is Moore’s Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some time, does not mean that it will continue to do so forever.
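The extrapolation the passage warns against is easy to make concrete. A minimal sketch, with illustrative parameters only (1971 and the Intel 4004’s roughly 2,300 transistors as a baseline, and a strict two-year doubling period, the classic statement of the law):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count under a strict Moore's Law doubling."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Ten doublings in twenty years: a factor of 1024.
print(transistors(1991) / transistors(1971))  # → 1024.0
```

The function happily projects forever, which is exactly the fallacy the article identifies: exponential growth for some time is no guarantee of exponential growth indefinitely.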

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about real evils oppressing humanity and our planet.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, disbelievers – AItheists – make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

AItheists’ faith is as misplaced as the Singularitarians’. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world’s most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow, so surely Google must know something about the real chances of developing a computer that can think, that we, outside ‘The Circle’, are missing? Eric Schmidt, Google’s executive chairman, fuelled this view, when he told the Aspen Institute in 2013: ‘Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years.’

The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s.

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question ‘Can a machine think?’ is ‘too meaningless to deserve discussion’. This holds true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
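One of those “well-known results” can be sketched directly in code. The names below are illustrative; the logic is Turing’s diagonal argument for the halting problem, the canonical undecidable problem:

```python
# Suppose, for contradiction, that a perfect oracle existed:
def halts(program, argument):
    """Hypothetical: returns True iff program(argument) halts."""
    ...

def contrarian(program):
    # Do the opposite of whatever the oracle predicts.
    if halts(program, program):
        while True:   # loop forever
            pass
    else:
        return        # halt immediately

# contrarian(contrarian) halts iff halts(contrarian, contrarian) says it
# does not halt -- a contradiction, so no such halts() can exist.
```

Any machine bound by the Turing-machine limits the passage describes inherits this result, which is why no amount of added computing power dissolves it.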

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviors. So we are not the only agents able to perform tasks successfully.

These are ordinary artifacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

What’s the difference? The same as between you and the dishwasher when washing the dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analyzing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won the boardgame Go against the world’s best player because it could use a database of around 30 million moves and play thousands of games against itself, ‘learning’ how to improve its performance. It is like a two-knife system that can sharpen itself.

We are and shall remain, for any foreseeable future, the problem, not our technology.

So we should concentrate on the real challenges:

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviors, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that ‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its smart technologies as well.

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante’s Inferno: ‘Speak not of them, but look, and pass them by.’ For the world needs some good philosophy, and we need to take care of more pressing problems.”(7)

Cites:

(1) You’ve read the hype, now read the truth

(2) Web Summit 2014 Day One – Gary Marcus

(3) Intelligence: a history

(4) fAIth

(5) Myth Busting Artificial Intelligence

(6) ‘Artificial Intelligence’ was 2016’s fake news

(7) Should we be afraid of AI?

Bitcoin Is a Trojan Horse

By now almost everybody on Earth has heard about Bitcoin, since it’s constantly in the news with its price rising rapidly.

The aim here is to give an overview of Bitcoin and then take a critical look at it.

Some common questions about Bitcoin:

“What the hell is it?

In the most general sense, bitcoin is software that forms a decentralized, peer-to-peer payment system with no central authority like the Federal Reserve or U.S. Treasury. It’s fair to call it a digital currency or cryptocurrency, but at the moment, most investors aren’t really using it as currency to pay for things. Instead, they’re using it as a speculative investment to buy in the hope of turning a profit. Maybe a big profit. (And maybe a big loss).

What backs or supports it? 

Bitcoin runs on something called blockchain, which is a software system often described as an immutable digital “ledger.” It resides on thousands of computers, all over the world, maintained by a mix of ordinary people and more sophisticated computer experts, known collectively as miners. Yahoo Finance’s Jared Blikre dabbles as a bitcoin miner, running mining software in the background on his laptop. Here’s how much bitcoin he has generated so far: 0.000000071589. At the current rate, it would take him about 1,200 years to mine one complete bitcoin. That gives you a sense of how complex it is to mine bitcoin, and how much processing power it takes: These computerized mining rigs throw off so much energy that they can heat your home.
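The “1,200 years” figure is a straight rate extrapolation. A sketch of the arithmetic; the accumulation period behind Blikre’s 0.000000071589 BTC is not stated in the quote, so the 45 minutes assumed below is purely an illustrative value chosen because it reproduces the article’s figure:

```python
def years_to_one_btc(btc_mined, hours_elapsed):
    """Extrapolate mining time for one whole bitcoin at a constant observed rate."""
    rate_per_hour = btc_mined / hours_elapsed
    hours_needed = 1.0 / rate_per_hour
    return hours_needed / (24 * 365)

# Assuming the quoted 0.000000071589 BTC took 45 minutes to accumulate:
print(round(years_to_one_btc(0.000000071589, 0.75)))  # → 1196
```

At that pace the projection lands near the quoted ~1,200 years; a faster or slower assumed period scales the answer proportionally.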


Who’s running the show?

Bitcoin is decentralized, which means there isn’t one arbiter, central party or institution in charge. Blocks of transactions are validated on the blockchain network through computing “consensus,” which is a feature of the software. Bitcoin was created by someone in 2009 using the pseudonym Satoshi Nakamoto, but it isn’t known who that was, and that person or group doesn’t have control over bitcoin today. [Can you say suspicious?]

What is there to value? 

The price of bitcoin fluctuates based on buying and selling, just like a stock, but there’s a ton of debate over what the price represents. In theory, the value of bitcoin should reflect investors’ faith in bitcoin as a technology. But in reality, investors mostly see bitcoin as a commodity because of its finite supply. Under Satoshi’s blueprint, the total supply of bitcoin will eventually be capped at 21 million coins. At the moment [12/19/17], 16.7 million bitcoins have been created. A fractional amount of new coins gets created every time a miner uploads a block to the blockchain, which is a reward for mining.
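The 21-million cap is not an arbitrary constant: it is the sum of a geometric series. The block subsidy began at 50 BTC and halves every 210,000 blocks, with rewards tracked in integer satoshis (1 BTC = 100,000,000 satoshis), so the series terminates. A minimal sketch of that schedule:

```python
def total_supply_satoshis():
    """Sum every block subsidy ever payable under Bitcoin's halving rule."""
    subsidy = 50 * 100_000_000          # initial 50 BTC reward, in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy      # one halving era lasts 210,000 blocks
        subsidy //= 2                   # the protocol halves with integer division
    return total

print(total_supply_satoshis() / 100_000_000)  # just under 21,000,000 BTC
```

The loop ends once integer halving reaches zero, which is why the cap is a hard limit slightly below 21 million rather than exactly 21 million.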

Is this a scam? 

It’s not a scam, in the sense of somebody marketing a bogus product. Bitcoin is a legitimate technology. The question is how useful and valuable it will become.

Is value completely determined by the free market? 

For the most part, yes. There’s a known and limited supply of bitcoin, so when demand goes up, so does the price. Technical innovation also contributes to bitcoin’s value. It was a novelty when first created in 2009, and the market has determined (for now) that it’s an invention that’s worth something.

If it’s virtual, can’t people make duplicates?

Yes, but that’s not a problem. All bitcoin transactions are stored on that public ledger, the blockchain. You can copy the blockchain, but it’s just a record. So you wouldn’t be changing the distribution of bitcoin. To process new transactions in bitcoin, miners with powerful computers solve complex problems that add the transactions in a block to the blockchain. This is called “proof of work” and is one of the core features of most cryptocurrencies. Multiple miners verify the work, which prevents fraud.
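The “complex problems” miners solve amount to a brute-force search for a lucky hash. The toy below uses a single SHA-256 over a string and a four-hex-zero target; real Bitcoin hashes an 80-byte binary header twice against a vastly harder target, but the asymmetry is the same: producing the proof takes many attempts, checking it takes one hash.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """One hash suffices to check the work -- the asymmetry behind proof of work."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("alice pays bob 1 BTC")
print(nonce, verify("alice pays bob 1 BTC", nonce))
```

This cheap verification is what lets the “multiple miners” mentioned above confirm each other’s work without redoing the search.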

Is this legal tender?

Not officially yet in the United States. “Legal tender” means the laws of a state or nation require any creditor to accept the currency toward payment of a debt. In the United States, for instance, merchants must accept the U.S. dollar, which makes it legal tender. The U.S. government allows transactions in bitcoin, but doesn’t require every nail salon, car dealership or restaurant to accept it. They do have to accept dollars. Meanwhile, Japan and Australia, among other countries, have officially recognized bitcoin as legal currency. 

What is the collateral behind bitcoin? 

Nothing! The bitcoin blockchain records the entire transaction history of all bitcoin, which is validated through proof of work. That’s not collateral, however. There’s no other tangible asset backing bitcoin, the way a car serves as collateral for a car loan or a building serves as collateral for a commercial property loan.

Who keeps track of each bitcoin? 

All of the miners who maintain the system.

How do you buy and sell it? 

There are a number of easy-to-use exchanges now where you can buy bitcoin using money transferred from a bank account, and in some cases by charging a credit card. The most popular mainstream option is Coinbase, which now has more than 13 million customers. Kraken is another one.

What are you actually buying? 

You’re buying a digital “key,” which is a string of numbers and letters that gives you a unique claim on the blockchain supporting bitcoin. You can transfer this asset to others for whatever the market price of bitcoin is, minus transaction fees.

Can it be traced back to you?

Yes. Anyone who buys or sells bitcoin on an exchange such as Coinbase must provide their personal information to that exchange. If law-enforcement agencies or the IRS need to know something about you, the exchange will have to provide the info under the same laws that govern banks or brokerages. But your personal info does not become part of the blockchain and is not visible to miners maintaining the blockchain.

Are bitcoins real money? And can I cash them in whenever I want? 

Bitcoin has value that can be converted into ordinary currency, or used to make purchases from sellers that accept bitcoin. So in that sense, it’s real money, and it will remain real money as long as there’s a market with people willing to buy it. To “cash in” bitcoin, you need to sell it to somebody, in exchange for dollars or some other currency. Exchanges that handle such transactions have experienced frequent outages that prevent some people from accessing their accounts or executing a trade for a period of time, especially when there are large movements in the price of bitcoin. So don’t assume you’ll be able to sell any time you want.

What is the value based on, besides scarcity? 

What buyers and sellers think bitcoin is worth. In other words, a lot of psychology.

How are Bitcoins stolen? 

The bitcoin blockchain itself is very secure, but bitcoins can be stolen from an account if thieves are able to log into your account and send the bitcoin to another account they control. Once bitcoin is transferred, it can’t be recovered. Thieves typically break into other people’s accounts by stealing logon and password info. That makes it extremely important to use all possible measures to safeguard a bitcoin account, including two-factor authentication with a mobile phone. You also have a “private key,” which is a third layer of security that you might need at some point, if there are questions about who’s logging into your account. This key is typically a string of keyboard characters that should be stored where it can’t be lost or stolen or accessed through the internet.

How does bitcoin generate revenue? 

Bitcoin itself doesn’t generate revenue. It’s best thought of as a commodity, similar to gold, that has a market price but doesn’t generate economic activity, the way a business does. When the value goes up, bitcoin can create profits. But when the value goes down, it can also create losses.

Miners earn money, paid in bitcoin, for creating bitcoin, which helps cover the cost of the time and computer power the process requires. They also earn small transaction fees from bitcoin users.

What’s the difference between bitcoin and other cryptocurrencies? 

That depends on which currency you want to know about, and there are hundreds of them now. (Yahoo Finance recently added full data and charts for 105 of them.) Some coins, like bitcoin cash, bitcoin gold or litecoin, resulted from forks of the main bitcoin code. Then there are coins that run on their own blockchain, like ether (the token of the ethereum network) or XRP (the token of the ripple network).

Do you have to report bitcoins to the IRS? 

The IRS considers bitcoin to be the equivalent of property, with profits (or losses) taxed more or less the same as the proceeds from a sale of stock. The IRS recently won a court ruling against Coinbase that requires the exchange to report information on customers who had more than $20,000 in annual transactions from 2013 to 2015. It seems inevitable that the IRS will treat profits and losses from cryptocurrency bets the same as it treats other investment income.
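Since the IRS treats bitcoin as property, a sale is taxed much like a stock sale: the reportable gain is the proceeds minus the cost basis. A minimal sketch with hypothetical figures (none of these numbers come from the article):

```python
# Hypothetical figures for illustration only.
cost_basis_per_coin = 2_000.00   # price paid per bitcoin
sale_price_per_coin = 17_900.00  # price received per bitcoin
quantity = 0.5                   # bitcoin sold

# Capital gain = proceeds minus cost basis, as with a stock sale.
proceeds = sale_price_per_coin * quantity
basis = cost_basis_per_coin * quantity
capital_gain = proceeds - basis

print(capital_gain)  # 7950.0
```

Whether the gain is short- or long-term (and the rate applied) depends on the holding period, as with other property.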

How easy is it to cash out of cryptocurrencies?

Not as easy as you’d like. Bitcoin is not as liquid as other investments, in part because settlement can take more than a week, even under good circumstances. Volatility and surging demand have caused frequent outages on exchanges such as Coinbase and Kraken, and you can’t sell if you can’t access your account. If such outages occur amid panic selling, some bitcoin holders might be unable to sell for a fairly long time, which could make steep losses worse as the price drops and people who want to sell can’t. That’s one thing that could harm confidence in the asset.”(1)

Some Bitcoin History

“For those that are not already aware, Bitcoin uses the SHA-256 hash function, created by none other than the National Security Agency (NSA) and published by the National Institute of Standards and Technology (NIST).

Yes, that’s right, Bitcoin would not exist without the foundation built by the NSA. Not only this, but the entire concept for a system remarkably similar to bitcoin was published by the NSA way back in 1996 in a paper called “How To Make A Mint: The Cryptography Of Anonymous Electronic Cash.”
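Whatever one makes of its provenance, SHA-256 is a public, standardized hash function available in virtually every language’s standard library. A short Python illustration of the function itself, including the double application Bitcoin uses when hashing block headers:

```python
import hashlib

data = b"hello"

# Single SHA-256: the NSA-designed, NIST-published hash function.
single = hashlib.sha256(data).hexdigest()

# Bitcoin applies SHA-256 twice (a hash of the hash) when
# computing block hashes.
double = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

print(single)  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
print(len(double))  # 64
```

The output is deterministic: the same input always yields the same 256-bit digest, which is what lets every node on the network independently verify the same chain.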

The origins of bitcoin, and thus the origins of cryptocurrencies and the blockchain ledger, suggest anything but a legitimate rebellion against the establishment framework and international financiers.

The truth is, the internet is also an establishment creation developed by DARPA, and as Edward Snowden exposed in his data dumps, the NSA has total information awareness and backdoor control over every aspect of web data.

Cryptocurrencies are built upon an establishment-designed framework, and they are entirely dependent on an establishment-created and controlled vehicle (the internet) in order to function and perpetuate trade. How exactly is this “decentralization”, again?

Total Information Awareness

Total information awareness is the goal here; and blockchain technology helps the powers-that-be remove one of the last obstacles: private personal trade transactions.

Years ago, a common argument presented in favor of bitcoin was that it was “completely anonymous.” Today, this is being proven more and more a lie. Even now, in the wake of open admissions by major bitcoin proponents that the system is NOT anonymous, people still claim anonymity is possible through various measures, but this has not swayed the FBI or IRS, which have for years been using resources such as Chainalysis to track bitcoin users when they feel like doing so, including users who have taken stringent measures to hide themselves.

Bitcoin proponents will argue that “new developments” and even new cryptocurrencies are solving this problem. Yet, this was the mantra back when bitcoin was first hitting the alternative media. It wasn’t a trustworthy assumption back then, so why would it be a trustworthy assumption now? The only proper assumption to make is that nothing digital is anonymous. Period.

Bitcoin Isn’t an Alternative to International or Central Banking

Cryptocurrencies like bitcoin are in no way a solution to combating the international and central banks. In fact, cryptocurrencies only seem to be expediting their plan for full-spectrum digitization and the issuance of a global currency system.

Bitcoin could easily hit $100,000, but its “value” is truly irrelevant and consistently hyped as if a high price makes bitcoin a self-evident solution to globalism. The higher the bitcoin price goes, the more the bitcoin cult claims victory, yet the lack of intrinsic value never seems to cross their minds. They have Scrooge McDuck-like visions of swimming in a vault of virtual millions. They’ll only accuse you of being an “old fogey” who “does not understand what the blockchain is.”

The fact is, they are the ones who do not really understand what the blockchain is: a framework for a completely cashless society in which trade anonymity is dead and economic freedom is destroyed.”(2)

A Cashless Economy = Slavery

Make no mistake: the goal is to create a cashless economic system on a worldwide basis.

“Despite the incredible penetration of credit and debit card transactions into the aggregate economy, and the boom in internet shopping, few will comfortably admit that a cashless society is nearly upon us.

Over the years, futurists and commentators alike seemed to agree that a cashless society will be a slow creep, and would automatically phase itself in simply by virtue of the sheer volume of electronic transactions that gradually make cash less available and more costly to redeem, or exchange. This is still true for the most part. What few counted on, however, was how the final push would take place, and why. Some will be surprised by these new emerging mechanisms, and the political and sinister implications they ultimately lead to.

It has long been the dream of collectivists and technocratic elites to eliminate the semi-unregulated cash economy and black markets in order to maximize taxation and to fully control markets. If the cashless society is ushered in, they will have near complete control over the lives of individual people.

The financial collapse which began in 2007-2008 was merely the opening gambit of the elite criminal class, a mere warm-up for things to come. With the next collapse we may see a centrally controlled global digital currency gaining its final foothold. The cashless society is already here. The question now is how far society will allow it to penetrate and completely control each and every aspect of day-to-day life.”(3)

“The central banks are planning drastic restrictions on cash itself. They believe that moving to electronic money will first eliminate the underground economy, and secondly, they believe it will even prevent a banking crisis. This idea of eliminating cash was first floated as the usual trial balloon to see how people would take it. It was first launched by Kenneth Rogoff of Harvard University and Willem Buiter, the chief economist at Citigroup. Their claims have been widely hailed and their papers are now the foundation for the new age of Economic Totalitarianism that confronts us. They sit in their lofty offices but do not have real-world practical experience beyond theory.

Considerations of their arguments have shown how governments can seize all economic power and destroy cash in the process, eliminating all rights. Physical paper money provides the check against negative interest rates, for if they become too great, people will simply withdraw their funds and hoard cash. Furthermore, paper currency allows for bank runs. Eliminate paper currency and what you end up with is the elimination of the ability to demand to withdraw funds from a bank.

Paper currency is indeed the check against negative interest rates. We need only look to Switzerland to prove that theory. Any attempt to impose, say, a 5% negative interest rate (a tax) would lead to an unimaginably massive flight into cash. This was already demonstrated recently by the example of Swiss pension funds, which withdrew their money from the banks in a big way and now store it in vaults in cash in order to escape the financial repression. People will act in their own self-interest, and negative interest rates are likely to reduce the sales of government bonds and set off a bank run as long as paper money exists.

The only way to prevent such a global bank run would be the total prohibition of paper money.
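The erosion described above is straightforward compound arithmetic. A minimal sketch, assuming a hypothetical -5% annual rate applied to a digital deposit with no cash withdrawal available as an escape:

```python
# Hypothetical: a 100,000 deposit under a -5% annual rate.
balance = 100_000.0
annual_rate = -0.05

for year in range(1, 6):
    balance *= 1 + annual_rate  # the negative rate compounds downward
    print(year, round(balance, 2))

# After 5 years the deposit has shrunk to about 77,378.09:
# a loss of more than 22% with no market risk taken at all.
```

With paper currency available, depositors would simply withdraw and hoard cash long before absorbing such a loss, which is exactly the check on negative rates the author describes.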

The Financial Times argued last year that central banks would be the real winners from a cashless society:

Central bankers, after all, have had an explicit interest in introducing e-money from the moment the global financial crisis began…

The introduction of a cashless society empowers central banks greatly. A cashless society, after all, not only makes things like negative interest rates possible, it transfers absolute control of the money supply to the central bank, mostly by turning it into a universal banker that competes directly with private banks for public deposits. All digital deposits become base money.

If all money becomes digital, it would be much easier for the government to manipulate our accounts.

Indeed, numerous high-level NSA whistle-blowers say that NSA spying is about crushing dissent and blackmailing opponents … not stopping terrorism.

This may sound over-the-top … but remember, the government sometimes labels its critics as “terrorists.” If the government claims the power to indefinitely detain – or even assassinate – American citizens at the whim of the executive, don’t you think that government people would be willing to shut down, or withdraw a stiff “penalty” from, a dissenter’s bank account?

If society becomes cashless, dissenters can’t hide cash.  All of their financial holdings would be vulnerable to an attack by the government.

This would be the ultimate form of control. Because – without access to money – people couldn’t resist, couldn’t hide and couldn’t escape.”(4)

The bottom line, as the corporate parasites say, is that a cashless economy equals slavery.

“Ask yourself this: Why is it that central banks around the world (including the BIS and IMF) are investing in Bitcoin and other cryptocurrencies while developing their own crypto systems based on a similar framework? Could it be that THIS infusion of capital and infrastructure from major banks is the most likely explanation for the incredible spike in the bitcoin market?  Why is it that globalist banking conglomerates like Goldman Sachs lavish blockchain technology with praise in their white papers? And, why are central bankers like Ben Bernanke speaking in favor of crypto at major cryptocurrency conferences if crypto is such a threat to central bank control?

Answer — because it is not a threat. 

They benefit from a cashless system, and liberty champions are helping to give it to them.

The Virtual Economy Breeds Weakness in Society

The virtual economy breeds weakness in society. It encourages a lack of tangible production. Instead of true producers, entrepreneurs and inventors, we have people scrambling to sell real world property in order to buy computing rigs capable of “mining” coins that do not really exist. That is to say, we may one day soon be faced with millions of citizens expending their labor and energy in order to obtain digital nothings programmed into existence and given artificial scarcity (for now).

Real change requires actions in the real world. Removing banking elitists and their structures by force if necessary (and this will probably be necessary). Instead, freedom activists are being convinced that they will never have to lift a finger to beat the bankers. All they have to do is buy and mine crypto. The day will come in the near future when the folks that embrace this nonsense will wake up and realize they have wasted their energies chasing a unicorn and are ill prepared to weather the economic reset that continues to evolve.

To maintain a real economy in which people are self-reliant and safe from fiscal shock, you need three things: tangible, localized and decentralized production; independent and decentralized trade networks that are not structured around an establishment-controlled system (as the internet is); and the will to apply force to protect and preserve that production and those networks. If you cannot manufacture a useful thing, repair a useful thing or teach a useful skill, then you are essentially useless in a real economy. If you do not have localized trade, you have nothing. If you do not have the mindset and the community of independent people required to protect your local production, then you will not be able to keep the economy you have built.

This is the cold hard truth that crypto proponents do not want to discuss, and will dismiss outright as “archaic” or “not obtainable.” The virtual economy is so much easier, so much more enticing, so much more comfortable. Why risk anything or everything in a real world effort to build a concrete trade network in your own neighborhood or town? Why risk everything by promoting true decentralization through localized commodity-backed money and barter systems? Why risk everything by defending those systems when the establishment seeks to crush them? Why do this, when you can pretend you are a virtual hero wielding virtual weapons in a no risk rebellion in a world of electronic ones and zeros?

In truth, the virtual economy is not legitimate decentralization; it is a weapon of mass distraction engineered to kill legitimate decentralization.”(2)

The Bitcoin Mania Bubble

You may have heard the recent Bitcoin craze referred to as ‘Tulip Mania’.

If you’re not familiar with this reference, here’s an explanation:

“Tulip mania (Dutch: tulpenmanie) was a period in the Dutch Golden Age during which contract prices for some bulbs of the recently introduced and fashionable tulip reached extraordinarily high levels and then dramatically collapsed in February 1637. It is generally considered the first recorded speculative bubble.”(5)

“The price of a rare tulip bulb on the futures market in Amsterdam in January 1637 was equal to ten times the annual wage for a skilled crafts worker. A single bulb was reportedly exchanged for 1,000 pounds of cheese at the height of tulip mania. The market collapsed precipitously starting in February 1637, bottoming out in May 1637.

According to economist Brian Dowd:

“By the height of the tulip and bulb craze in 1637, everyone … rich and poor, aristocrats and plebes, even children had joined the party. Much of the trading was being done in bar rooms where alcohol was obviously involved … bulbs could change hands upwards of 10 times in one day. Prices skyrocketed … in 1637, increasing 1,100% in a month.”

Bitcoin, the original cryptocurrency, was valued at $0.08 in July 2010, $8,100 on November 20, 2017, and $17,900 on December 15, 2017. The sky is apparently the limit.

The danger, of course, is not just that at some point the bigger fools, the last purchasers of bitcoin and the long-term holders (“hodlers” in crypto-speak), will lose some or all of their money. That would be regrettable. But as with a straightforward pump-and-dump manipulation of a stock, some will win while others lose.

But, as in 2007 and 2008, the creative greed behind global financialization is creating a bubble in bitcoin and many other cryptocurrencies as investors pile into the markets, just as they did in Holland in 1637. There is a real and rapidly emerging threat that bitcoin and its ilk could follow dynamics similar to those of mortgage-backed securities, which served as the basis for highly leveraged and complex financial instruments, like credit default swaps, that were traded in unlimited volumes untethered to the actual number of mortgages.

Cryptocurrency has now entered the leveraged futures market. Speculators can now leverage futures purchases 15 to one. This means a roughly 7% drop in the price of bitcoin (a familiar phenomenon) could wipe out an ‘investor’s’ capital, returning us quickly to the momentous margin calls of 2007-8. And there is no limit to the number of futures contracts.
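The roughly 7% figure follows directly from the leverage ratio: with 15-to-1 leverage, the speculator posts only 1/15 of the position’s value as margin, so a price drop of that same fraction (about 6.7%) erases it entirely. A quick check of the arithmetic:

```python
leverage = 15

# A speculator posts 1/leverage of the position's value as margin,
# so a price drop of that same fraction wipes the margin out.
wipeout_fraction = 1 / leverage
print(f"{wipeout_fraction:.1%}")  # 6.7%

# Concretely: a $15,000 position controlled with $1,000 of margin
# is wiped out by a ~6.7% price drop.
position = 15_000
margin = position / leverage
loss_at_wipeout = position * wipeout_fraction
print(round(loss_at_wipeout, 2) == margin)  # True
```

Given how routinely bitcoin moved more than 7% in a day during this period, a fully leveraged futures position could be erased by ordinary volatility, before any broader collapse even began.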

Derivative instruments of greater complexity and undefined risks are almost certain to appear swiftly, as they did in 2007 when, for example, insurance giant AIG took enormous bets to earn premiums on credit default swaps on mortgage-backed securities. And those securities were AAA-rated. The sudden collapse of mortgage-backed securities led to a liquidity crisis. The securities could not be sold at almost any price, and the giant financial institutions on the wrong side of the bets were suddenly bankrupt.

As Frances Coppola in Forbes points out,

“As more and more financial institutions with connections to the real economy pile into the cryptocurrency mania, the chances of a similar disastrous collapse rise ever higher, and along with it, the likelihood of a Fed or even a government bailout.”

The intent of those driving the explosion of cryptocurrency prices is not a desire to use cryptocurrency as a low-cost, reliable medium of exchange verified by a transparent blockchain, but as a magic carpet to wealth. If you’d bought $100 worth of bitcoin in 2010, it would have been worth $1.79 million as of December 15, 2017. It is paradoxical that cryptocurrency, allegedly meant to free us from fiat currency, finds its liquidity and value in the almighty dollar.

Bitcoin transaction costs are also soaring, with fees rising to $20 charged by blockchain “miners,” whose computers verify transactions and in doing so create more blocks and produce more bitcoins as part of the solution to the algorithm that verifies transactions. Far from being a means of very quick, cheap, anonymous financial transactions, bitcoin is becoming slow and expensive to use.

By making cryptocurrency into an investment and a get-rich-quick scheme, as opposed to a free instrument of exchange and trade, it has become just another arrow in the quiver that is making the rich richer and worsening the already grotesque distribution of income. Cryptocurrency speculation will make some people rich, as do day trading and house flipping, but many more will lose than win.

The Bitcoin and cryptocurrency bubble will not end well.”(6)

Cites:

(1) “What The Hell Is It?” – 74 Cryptocurrency Questions Answered

(2) Brandon Smith Warns ‘The Virtual Economy’ Is The End Of Freedom

(3) The Cashless Society Almost Here And With Some Very Sinister Implications

(4) Why the Powers That Be Are Pushing a Cashless Society

(5) Tulip Mania
