Tuesday, June 30, 2015

Metablogging

This is more for me than for my readers, but I felt my readers would benefit from this too, especially if some of them are fellow bloggers.

Since I'm nearly thirty posts into this blog, I thought I'd take some time to reflect on the blog itself, my writing style, and the challenges I face when writing here. I'll also give you a bit of info about me, too.

Cybermantics, The Blog


First, the blog itself. I write this blog to celebrate my love of technology and my love of culture. I often intermix the two to showcase how technology influences culture and, inversely, how culture influences technology. Before this blog, I was a fan of social commentary. This blog is the result of combining that love of social commentary with my more recent love of technology.

I have a rather unfortunate habit of starting things and never finishing them. Over the years, I've learned to be a bit more patient with myself and my endeavors. My posting goal is to upload a post at intervals of no greater than five days. I know I won't always achieve this goal. I don't believe in pushing myself to write and post every single day because I know, if I do, I'll burn myself out. And that's no fun. This isn't a "for-profit" blog (despite the "donate" button). It's a for-fun blog. If I'm not having fun, then why would I post?

Writing Style 


I'm still fairly new to writing social commentary, and to writing in general, so I constantly face challenges when writing about worthy subjects. I had one blog prior to this one. That blog was on a completely different subject, so the transition to writing about technology was difficult to make, even though I did a lot of social commentary there as well. Though I've been writing for a little over a couple of years, I'm still trying to find my writing style. I opt for a mixture of formal and informal registers and question-and-answer styles. I don't mind being serious from time to time, and I love to be silly and facetious.


Challenges


Some of the challenges I face are staying on topic (both staying in tune with the blog's purpose and with a particular post's theme), consistent formatting, clunky writing, paragraph transitions, tagging my posts, and organizing the content of my posts. Perhaps the biggest of these challenges is staying on topic while keeping my post content reasonably organized. I've been known to branch off into irrelevant (or relevant, yet out of place) rants and ideas, but I suppose that's part of the fun, and part and parcel, of the blogging experience.

One of the more irritating challenges I face while writing is figuring out how to end a post. It's irritating to start off so strong, yet finish so weakly.

I've also been trying to strike a balance between simplicity and complexity. Most of my posts are rather short and simple and, thusly, read as rather clunky. I'm a fan of concise posts. I've yet to figure out how to make my posts both concise and readable!

Finally, a particularly irritating challenge for me as of late is knowing when to stop writing. Sometimes, I can keep writing, writing, writing forever. I'll spend tons of time making a post as perfect as it can possibly be, and it's difficult for me to know just when to stop. This is another reason why I set the goal of making one post at least every five days. When those five days are up, I post what I've got, no matter the quality. Usually the quality is more than sufficient.

And then there are times when I feel that I posted something prematurely, so then I worry about that. ARGH!!! Why is something like blogging so stressful?!


Signing Off


All of the subjects I write about are worthy of discussion (some worthier than others), which is why I write about them. I can only hope that my writing ability does sufficient justice to the topics I cover. One of my longer-term goals for this blog is to post consistently for the length of a year. I'm happy to say that I'm over two months into that goal. I've got a long way to go and I don't know if I'll make it. But, if I don't make it, I'll put in a heroic effort before failing.

Some quirks and facts about my style/posts/posting habits:

  • I don’t like to make more than one post a day since, when I do, my blogging template “runs” the posts together and makes it seem like two posts are one giant post
  • I have a rather irritating habit of using the word “that” too much in my posts. I hope to change that…dammit!
  • I try to put in at least one hour of writing per day. It doesn't always happen, but it's good to have goals
  • I use the words "machine" and "computer" / "articles" and "posts" / "consciousness" and "ghost" interchangeably
  • I can’t figure out which font size I prefer
  • I'm terrible at citing my image sources. I would like my readers to know that none of the images used belong to me.
  • I don't usually make corrections to my past posts because, when I re-upload a post after performing a correction, it screws with all the links that pointed to that post. I've decided to let past imperfections become beauty marks.

Monday, June 29, 2015

Cyberpunk Society and Our Society: Present and Future Drugs

[Image: gentlemen getting their information fix]

I'm drunk and I want to make a drunken post on what is currently a Monday night. Living the dream right here. That may sound sarcastic, but there are few things I love more than drunken writing. Anyway, as I've probed my mind for things to talk about, in my drunken stupor, it has occurred to me that I rarely touch upon drug use. Come to think of it, have I ever even mentioned drug use? I can't think of a single instance in which I remarked on it. The fact that my blog was inspired by cyberpunk fiction, and the fact that drug use is rampant in said fiction, makes it glaringly clear that something is missing from my blog discussions. How could I have neglected such a thing? After all, cyberpunk is "high-tech; low-life" and all low-lives have one thing in common: they like drugs. When your life is in the gutter, what better way to find that shred of happiness than through either biological augmentation or the bottle? It is a sin for this blog to exist without having a single post about drug use, namely, the future of drug use. And that's what I want to talk about here: the drugs of our present and our future, and the drugs that reflect our lives and our culture. But what do I mean when I say "the drugs that reflect our lives and our culture"? Let me explain.

Mainstream drugs reflect mainstream cultural issues and challenges. The primary drugs in use reflect the era. They are both a consequence and a cause of the era or, more accurately, the culture of the era. The 60's and 70's, the era of free love, mysticism, and music, had LSD, marijuana, and mescaline. The 80's, the era of parties, late nights, and disco, had cocaine and amphetamines. The 90's, the era of seeming hopelessness and despair, had heroin.

And what are the drugs of the new millennium? Necessarily, these drugs must reflect the issues, culture, and challenges of the new millennium. These drugs are marijuana, alcohol, nicotine, caffeine, and Adderall (cocaine, too, makes the occasional appearance). Is it any coincidence that over half of these drugs are stimulants? Is anyone really surprised?

It seems every few weeks we are greeted with a "shocking" and salacious report about college students using stimulants like Adderall, Ritalin, and Vyvanse. Again, why are we so surprised? I think the better question is, why do people pretend to be so surprised? Stunningly enough, people who are squeezed in a vise, day in and day out, look for a means to cope with that pressure. Either they drown their worries in booze and partying or, alternatively, they endeavor to beat their overscheduled lives by taking heavy stimulants. The drugs we use reflect the culture we live in. And those drugs go on to shape our culture.

The modern era is a rush. Everyone rushes to get somewhere for some reason and for some end, only to find the final end in the ground. Humans are bombarded with information as never before in the history of the world. We're overstimulated. We need to ingest booze or other depressants just to slow down for a moment of recovery. Our poor minds, in order not to overload, need to erect filters. Indeed, even with these filters, our minds are filled to bursting with information. Never before have humans had so much information crammed into their heads, yet the average person has so little control over that information.

Where am I going with all this? I'm about to introduce another completely legal and widely distributed drug, offered in our present by our media and our technology. That drug is information. A recent study suggested that people would rather stimulate themselves with electric shock than be left alone with their own thoughts. People would rather inflict pain on themselves than be disconnected from the world of information. They need the stimulation even if that stimulation is a painful electric shock; such is the degree of their addiction to information. Do the results of this study support my idea of information being the drug of the future? If there is still doubt, let's take a look at the beast itself: social media.

Literally billions of people enjoy the fruits of social media websites and forums. Facebook, Twitter, Pinterest, Tumblr, Flickr, YouTube, Instagram, and, yes, even Blogger. And don't forget all those "news" sites, magazines, "studies", surveys, and articles that get linked to those social media sites. And all those comment sections and share "buttons" and up-votes/down-votes. All spreading, spreading, spreading. Sharing…sharing…sharing. Sharing…what? You guessed it: information! Did you get your information fix today by reading this post? Don't shy away from it. Embrace it. Own up to it. I've got no problem admitting I have an addiction to information. It's the drug of the future. Presented in a way no other drug has ever been distributed: virally, by electric fire! Flashing images, gripping sounds, byte-size quotes to thesis-size rants. Information is intoxicating. And it's all at our fingertips, provided you have a computer of some sort. You can get your information fix at pretty much any time; just like the gentlemen in the image above.

I’ve said that the drugs society uses both reflect and reinforce mainstream culture. The information drug is the perfect supporting example of this claim. Our culture is all about that information. It’s all about being in-the-know. Being privy. Being knowledgeable. Being hip and savvy. So it isn’t shocking that information has become our go-to drug. And, for the second part of my claim, how does the information drug shape our culture? The information drug creates the addiction, an addiction which must be satisfied. How is it satisfied? More information is created, produced, and distributed. Yes, another vicious cycle. Our culture created the information drug and the information drug created more need for the information drug within the culture, which creates more sources of information. There are so many vicious cycles in this future of ours. How appropriate that most cyberpunk fiction takes place in dystopian societies. We have our own cyberpunk dystopian society filled with a drugged-out populace looking for their next information fix.

You may say the information drug isn't unique to the 21st century; people have been craving the information drug since at least the 50's: the television era. This is true. In fact, the information drug has been on the market for at least as long as humans have existed. People have always craved information in one form or another. What makes the 21st century unique is that humanity now has the capacity to overload itself with the information drug, to form a serious addiction, and to propagate that addiction to no end.

Is there any hope for us information-riddled addicts? I think there is some hope even for the most craven and depraved of addicts. As the old saying goes, an ounce of prevention is worth a pound of cure. The best way to beat information addiction is to never become an addict. But what about those billions who are already addicted? If they are going to beat their addiction, they need to come to terms with the fact that they don’t need to be up-to-date on the latest news, “study”, gossip, or information. They need to take a step back, take a deep breath, and just let go.

It's important to note that some people are more addicted than others. I'm addicted, but not nearly to the degree of the average teenage girl or the average Facebook user. I don't seek help for my addiction because I can control it within reason. I even enjoy my addiction, as far as an addiction can be enjoyed. I understand that I am the one in charge of my life and I accept responsibility for my addiction.

In our society, this addiction can’t really be avoided, but it can be easily controlled with a little intelligence and a little willpower. So, the one question that remains is, “Will you accept responsibility for your own addiction?”

 

Thank you for choosing this blog to satisfy your information cravings.

Information: Please Learn Responsibly.

Wednesday, June 24, 2015

No Knowledge of Computers = No Future?

It’s funny how thoughts and memories from the past occasionally pop up in your head without any warning. Recently, I remembered a conversation with a coworker about my intent to learn more about computing. After I told him this, he said something which I found to be remarkable at the time, but even more remarkable now. He said, “If you don’t have knowledge of computers, you don’t have a future.” This is simply a paraphrase, but I think I captured the spirit of what he said.

As I mention in many of my posts, and it's likely my readers are tired of hearing me say it, computers are ubiquitous, pervasive, and even invasive. This seems like common knowledge, and it is common knowledge, but how often do people think of the repercussions of this commonly known fact? Computers are everywhere and they are heavily integrated into most workplaces, organizations, homes, etc. Computers run our society's infrastructure, our entertainment, our jobs, our…everything. If you are averse to learning how to operate a computer, then you are quite limited in our society.

The average person, if he is to have at least some success in his working career and life in general, must have at least a rudimentary understanding of how to use a computer. User interfaces, however, take much of the burden off the average person, which is a good thing, to an extent.

So, I have to agree with my coworker’s sentiment. If you are to operate a computer, you need to have some knowledge of it. And, since computers increasingly rule our world, not knowing how to use a computer means you don’t have much of a future.

 

After digging through my memory banks a bit more, more of that conversation resurfaced. I think I may have misinterpreted what my coworker said because I simply misremembered the situation. But, I’ll leave the above paragraphs in this post because they are important and, more importantly, I like them.

Now, after further reflection, I recall that my coworker made his remark after I informed him about my interest in learning about programming. Perhaps, by his comment, he meant that if a person doesn't have knowledge of programming, or other more specialized computer knowledge, then that person doesn't have much of a future.

Well now, that seems to be more open to debate.

So, how about it? Does a person who doesn't have a specialized knowledge of computers have a future? I'll go with "yes" on this one. A person who doesn't have special knowledge of computers still has a future. Today, a person doesn't need specialized knowledge of computers in order to use one. Thanks to all those user-friendly interfaces, a technologically-illiterate person can survive in the future just as well as the technological know-it-all next door. Now, the technological know-it-all may have an edge on the illiterate; however, the illiterate can still survive and thrive (likely thanks to the works of the know-it-alls). And, to be fair, the tech-geeks do have better career prospects than most folks, but this doesn't mean that only the tech-geek will flourish in the future.

The future has a place for us all. Though, for the technologically-illiterate, that place may not be as glamorous as the place occupied by the know-it-alls. So there may be some truth to my coworker's sentiment. I guess the lesson to take away from all this is that, if you want to live the high life in the future, then get to work on the foundations of that technical knowledge right now. Your future will shine in proportion to the amount of work you spend polishing your tech skills today. Get to it.

Tuesday, June 23, 2015

Technology as a Fashion Accessory and The Technology/Culture Feedback Cycle

[Image: Bluetooth bracelet]

In It's Cool to be a Geek? Technology Influencing the Culture of Cool, I remarked on how smartphones are cool and, if you possessed a smartphone, your cool factor got a bit of a boost. And, because I am so kind, I even gave a bit of an explanation as to why smartphones are seen as cool. In summary, smartphones are cool because they symbolize wealth, hipness, and tech know-how (which is no longer considered "geeky", at least not in the negative sense of the word). The geek is vindicated and his world is now a fashion accessory. This may be cause for a bit of resentment and weeping. How would you like it if something you cared for deeply (say, technology), something that once meant social ostracism, now became something that merited acceptance and accolades? Not only that, but to have your passion reduced to a mere fashion accessory by mainstream society? Kind of a kick in the teeth if you ask me. But let's look on the bright side of all this. It means the geek's passions are no longer disparaged, at least not as they once were. It also means businesses are striving to produce even greater, more innovative tech. So, fear not, geeks, there are pluses to this fashion accessory trend. Let's take the time to examine it a bit more.

In the post named above, I observed that some technology was becoming more of a fashion accessory. Here, I'll speculate that this trend will continue into our future. Today, technology isn't just something you use. It's also something you wear. Something you show off. Something you aren't afraid to take out in public, unlike the wrist calculator of yesteryear. This is an interesting, and recent, trend within the tech industry.

I’m not the first to notice this trend, but it’s something that deserves to be pointed out over and over again.

Why do I think this technology-as-fashion-accessory trend will continue into the future? This speculation is based on what I perceive to be a cause of technology as fashion accessory: the growing pervasiveness of tech in modern society. I riff on this chord quite a lot, almost to a sickening degree. But it's incredibly important to understand how technology influences our culture and how our culture influences technology. The technology-as-fashion-accessory trend is just a symptom of this larger phenomenon.

The above paragraph is a bit confusing to read (and this is largely due to my inability to write), so I want to better explain it here, with a flowchart!

[Flowchart: technology influences culture, culture influences technology, with profit feeding the cycle]

I just knew I'd find a use for Visio 2013. Hopefully this flowchart illustrates what I mean when I say "technology influences culture, culture influences technology". I also threw in another factor that contributes to the presence of technology in pop-culture: profit. If businesses make money by introducing tech-themed gadgets (see the bracelet above) or technology in general into the culture, then you will see more tech introduced into pop-culture. If you want to see more tech in mainstream society, then vote with your wallet. Buy the gadgets, gizmos, and other tech produced by businesses. Send a message with your dollar that you want to see more tech in society.

Something I want to clear up about the flowchart is the arrow that points from the "influence" circle to the "Technology advances far beyond…" parallelogram. It's true, as the arrow suggests, that pop-culture supports the advancement of technology. However, pop-culture also directs the advancement of technology. For example, think about the Apple iPhone: would it look the way it does (sleek, slim, and simple) if pop-culture (or, rather, the makers of that culture: mainstream society) didn't want it to look that way? Again, culture influences tech and tech influences culture. That can't be repeated enough. We're stuck in a perpetual feedback cycle. Which isn't necessarily a bad thing.

What does the future hold for us? My guess: more of the same.

I probably could have broken this post up into two separate posts, but I was on such a roll that I couldn’t help but relate the technology as fashion trend to the tech/culture feedback cycle. Oh well, such is life.

Monday, June 22, 2015

Will Machines Gain Rights?

In Programming a Ghost, I briefly explored the possibility of a computer developing, or being programmed with, a ghost or, more simply, a consciousness. In that post, my focus was primarily on the possibility of a machine obtaining a ghost, how we would know whether a machine had a ghost, and what it would take to program a ghost. I also briefly mentioned that there would be enormous political, social, and economic repercussions if machines were to become conscious. Now's the time to explore just one of those repercussions: machine rights.

I think it’s likely humanity will see serious “machine rights” movements within the next ten years. The debate has already started. And talk about the relationship between humans and machines is gaining political undertones. And why wouldn’t it? Machines are now closer to exhibiting human thought patterns than ever before.

The current understanding of rights is that, to have rights, something must possess the mental faculties of the one rights-bearing organism on the planet: a human. If something possesses human thought patterns, or at least appears to possess human thought patterns, then how can someone argue that humans should have rights while machines should not? The arguer would need to either disprove that the robot is thinking (how would that be possible?) or use another standard to determine whether something has rights. Would the new standard be the human soul? Perhaps "No Soul, No Vote" will become a popular political slogan in our near future. But, then, how would a person know whether a machine has or doesn't have a soul? Not that this would matter much to politicians, since politics has never been about honesty or intellectual debate. Politics is about political expediency and interest. What political candidate is going to run on a platform that seeks to give machines rights and, thus, lawfully prohibit people from interacting with machines against the machines' "will"?

I don’t like the idea of having to ask my computer whether I can type on its keyboard or use its programs. But whether I like the idea of machine rights or not is not really relevant. I could dislike the idea of some person having rights, but this isn’t reason enough to strip someone of their rights.

While I don’t believe politicians, at least not mainstream politicians, will adopt machine rights as a political goal, I do believe the debate for machine rights will gain some steam in the coming years. It will be a slow process, but it will happen.

And what would happen if machines did gain rights? Which machines would gain rights? Desktop computers, laptops, smartphones, infrastructure machines? Surely, if we are using the old standard of rights, the only machines that would gain rights would be those that exhibit human intelligence and reason to the same extent as an adult human. But wouldn't "lesser-minded" machines also have some rights of their own? After all, children and the mentally disabled still have rights, though not the same set of rights as their adult, mentally-advanced peers.

Additionally, if machines were to gain rights, humans, since we depend upon machines, would need to develop machines that are safely below human intelligence and reason so that we could still use them without the long arm of the law coming down upon our heads. But wouldn't machine activists simply argue that we are still abusing mentally-handicapped machines for our own ends? It could happen.

I don't think machines will gain rights in my lifetime, and I expect to live at least another sixty years. Remember, politics is about political expedience and interest, so no popular politician in his right mind would advocate machine rights if giving machines rights would mean forbidding people from using them, unless the machines consented to being used.

We’re in for some interesting times.

Sunday, June 21, 2015

Is the User an Idiot?

[Image: "idiot" quote wallpaper]

Once upon a time, in a distant wondrous past known today as the 80's and 90's, computers were complicated and, often, difficult-to-use machines. It took a special group of people, known then and today as the geek, to master and use them effectively. Computers weren't for everybody. Not everybody could afford them and not everybody had the time to learn how to use them. And, to be fair, computers back then didn't have nearly the functionality of the computers of today, so there wasn't much point in learning how to use one if you were the average person. But, then, in the late 90's spanning into the early 2000's, computers became a very useful thing, even to the average person. However, the computer was still a complex thing that the average person couldn't understand with ease. Enter "user-friendly" machines and interfaces. Say good-bye to the command line interface and say hello to the shiny, new graphical user interfaces (GUIs). Complicated and complex is out and simple is in. But at what cost? I'll get back to this later. First, a little history.

Needless to say, the newfound usefulness of computers and the new, easy-to-use interfaces made the computer accessible to just about anyone. Now, computers were for everybody because everybody had use for them and could use them. As time went on, into the 2000's, computers were gaining functions and abilities that could only have been imagined by a few geeks in the 80's and early 90's. But as these new functions came into play, they necessarily made the computer more complicated, so even easier, user-friendly interfaces needed to be developed. In the early 2000's, entire industries committed to the development of "user-friendly" interfaces sprang from the void. The term "user-friendly" became mainstream and a bedrock within the tech industry. Everything needed to be user-friendly and ergonomic: the operating system, the applications, websites, interfaces, the hardware/software/firmware, the plug-and-play hardware, even the computer chair. If a product was user-friendly and easy to understand, it gave the manufacturer a serious edge over the competition.

Of course, “user-friendliness” wasn’t invented by the tech industry, however, it can be argued that the tech industry took it to its fullest implementation. If a new product isn’t user-friendly, then it’s shelved or redeveloped until it is user-friendly.

Just how user-friendly can things get? How far does the user-friendly rabbit hole go? I’ll tell you how far it goes. One day, there won’t be a user-interface. One day, the user can just say “Do my taxes” and the taxes will be done. The user won’t even need to speak in coherent sentences. He could say, “OJFWOojfoewjofw ojwaf o”. And the computer would do its best to figure out just what the hell the user wants and then perform the action in record time. I feel for machines, I really do.

There is, of course, a serious problem with all this "user-friendly" emphasis. The technology has become too damn "friendly"! I think user-friendliness has its place. After all, in the end, the end-user is a human, not a machine. And the average human doesn't have the time, or doesn't have the interest, to learn the ins and outs of a computer, even on a basic level. But, as I mentioned, all of this user-friendliness has a cost.

This is where I explain the first image in this post. It looks familiar, doesn’t it? Doesn’t the text capitalization style seem to resemble the style used on a rather famous product line? Don’t worry if you don’t see it at first, I didn’t either. Here’s a picture to clear things up.

[Image: Apple iPod, 5th generation]

Yes, you guessed it. Apple is the prime culprit in making things too user-friendly. But what do I mean when I say something is too "user-friendly", and how could such a thing possibly have a downside? There are, in fact, trade-offs when it comes to making a product user-friendly. These trade-offs can be circumvented, but doing so would require far more effort, and all that effort is already being thrown into making the product the most user-friendly thing on the planet. So, just what are these trade-offs? The trade-offs are limited access to the back end of a product and the general dumbing down of the population.

Cut off from the Back End

There are few joys comparable to seeing the inner workings of a technological miracle like the computer. But what do I mean by "cut off from the back end"? I mean the gates to the inner workings of the computer have been shut tight for fear that the user will, somehow, mess up the device (which is a possibility). Apple is a prime culprit of this crime (along with most smartphone manufacturers and Microsoft in its Windows Vista years). Seriously, on most smartphones, the user can't even access the file system, let alone the deeper levels of the device. Does this seem like a strange complaint? I mean, why would anyone want to access the lower levels of an operating system? That's for those tech geeks and tech-support guys out there. Well, I like having the opportunity to augment certain functions of my computer that can't be changed unless I have access to the computer's registry files. There's a psychological consequence to this, as well. When I can't access the basic layers of my machine, it feels as though my machine isn't really mine. I feel more like a simple user rather than the owner of the machine, an outsider rather than an insider. And I don't like feeling like an outsider when using my beloved machines.
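
For the curious, here's roughly what poking at those registry files looks like. This is a minimal sketch in Python, assuming a Windows machine and the standard winreg module; it only reads a value (the current wallpaper path), which is a safe place to start before attempting the riskier tweaks I'm alluding to.

    # registry_peek.py - read a single value from the Windows registry.
    # Read-only: QueryValueEx never modifies anything.
    import winreg

    # HKEY_CURRENT_USER\Control Panel\Desktop holds per-user desktop settings.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop") as key:
        wallpaper, value_type = winreg.QueryValueEx(key, "Wallpaper")
        print(wallpaper)  # path to the current wallpaper image

Writing values back (with winreg.SetValueEx) is where the real tweaking, and the real risk, begins, which is exactly why so many vendors wall this layer off.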

Dumbed Down Population

The famous philosopher/sociologist Herbert Spencer once said, "The ultimate result of shielding men from the effects of folly, is to fill the world with fools." If this can be applied to the user-friendly trend, then it may be said that making everything so damn user-friendly has effectively dumbed down the population. People don't really have to think when it comes to technology. Just push a button and you're off to the races. What incentive do people have to think about their technology when their technology can, effectively, think for them? The technology sometimes even goes so far as to treat the user as though the user is an idiot. And, in turn, the user kind of becomes an idiot or, more politely, an ignoramus. Not a bad deal for the technocrats, though.

 

I'm pretty harsh on user-friendliness, but I'm keeping it within reason. I have admitted that user-friendliness does have its place, since people are people, not machines. Additionally, keep in mind that user-friendliness depends on the user. The geeks may pine for the glory days of MS-DOS. But, to them, the MS-DOS command prompt was user-friendly. Though, I think we can all agree that things have become incredibly user-friendly, to a fault.

[Image: Windows command prompt]

Perhaps we can strike a balance between user-friendliness and accessibility to the machine. It’s been noted that the reason businesses make computers so easy to use is so they can have that competitive edge in their respective market. And it’s been noted that most people desire the benefits offered by the computer, but can’t particularly be bothered to learn about the computer’s inner-workings. I’m not bashing those folks who just want the goods without understanding how a computer dishes out those goods. We can’t all be tech geeks and we don’t all have time to understand how to use the command line. I’m just going to leave them be. They have their lives and I have my geeky life. Sometimes, though, I just wish people would take the time to learn a little about the machine on which they depend so much. I’m not asking for the moon here.

Finally, there are alternative operating systems out there for those geeks looking for both accessibility and a challenge. I'm looking at you, GNU. And that's enough for me.

Saturday, June 20, 2015

It’s Cool to be a Geek? Technology Influencing the Culture of Cool

[Image: geek fashion]

When exactly did it become cool to be a geek? No really, help me out; I can't pinpoint the exact time it became cool to be a geek. It kind of just snuck up on us, didn't it? A slow, gradual, unnoticeable process that sought to lift the lowly geek from his cradle of public ostracism. But it's here, no doubt about it. Cool geeky action heroes star in mainstream movies, digital watches are chic, tech know-how no longer means you're a loser, science-related Facebook pages have millions of followers, and "geek fashion" is trendy. I've got a theory as to why the geek has made a comeback but, like all theories, it's only a collection of observations and assumptions, and it has some holes. It's not perfect, but perhaps it will help explain why the geek is the new king.

But is the geek truly the new king or am I simply losing my mind? Let’s make some observations.

First, let’s look at fashion. When I look at “geek fashion”, the primary thought that flashes in my mind is: What’s going on? How did dressing like a geek become fashionable? Fashion has always been the domain of the cool and hip, and now that geek-wear is accepted and cool, what does that say about society’s opinion of the geek? The fashionable parts of society have welcomed the geek with open arms. Is this “Geek is the New Sexy” fashion movement simply an offshoot of the hipster’s ironic fashion? Is there something more to this? I think there is, but let’s look at the action heroes next.

[Image: men's geek fashion]

Shia LaBeouf is a prime example of how the geek is gaining momentum. Years ago, in the 80's and 90's, the geek was heavily stereotyped in cinema. At worst, he was made a target of ridicule. At best, an object of sympathy. Only occasionally was the geek the star of the show (see Revenge of the Nerds) but, even then, he was heavily lambasted for exhibiting the stereotypical geeky traits. Now, though, Shia LaBeouf is the hero. Albeit a mostly reluctant hero. But, in the end, he still gets the girl (a standard trope of the hero within cinema). Additionally, look at the romanticization in cinema of the king geek, the hacker, who is cool in his own way; he too benefits from, and adds to, the geek's new, cool image. Next, let's look at cool technology.

In the 80's and 90's, you would be publicly ostracized if you were caught wearing a wrist calculator. The wrist calculator was the scarlet letter of the geek and all the reason anyone needed to poke fun at, at best, or humiliate, at worst, the geek. In 2015, if you don't have the latest tech and the latest, hottest smartphone, your cool status is called into question. What trendy teenager would be caught dead without their beloved, sleek, slim, sexy, cool smartphone? And not just teens. Everyone loves their phones, but it seems that teens and young adults in particular use their phones to boost their cool factor. Indeed, we are looking at a relatively new trend in technology. New technology needs not only to function properly, like in the good ol' days of the 80's and 90's, it also needs to look cool. Next, those science-related Facebook pages and YouTube channels.


Some of these Facebook pages, like IFLS, command the attention of millions of people. Numberphile, a channel that analyzes numerical and mathematical theories and relates them to practical matters, has over 1.2 million subscribers on YouTube. How the hell did such geeky subjects gain cultural momentum? In the 80's and 90's, mathematics and science were niche subjects that were only studied in university classrooms or under cover of night. These days, if you haven't seen the latest post on IFLS, you've fallen out of the in crowd. All the cool kids love IFLS.

Perhaps I'm looking at this all wrong. Maybe geekiness never became cool. Maybe what became cool was a revised idea of the geek. Historically, the image of the geek involved messy, dirty hair; a bad complexion (think acne); and social awkwardness. What does the new image of the geek look like? It sure as hell doesn't look like me. The geek's new image is a handsome, muscular guy with good skin, presumed social grace, and clean hair. How do we know he's a geek? He's also wearing horn-rimmed glasses.

Maybe I’m just looking at that wrong, because I just described a fashion model modeling “geek fashion”. The fashion industry both reflects and influences culture, so it may be safe to assume that something had to cause the fashion industry to take up the geek banner.

Still, maybe Shia LaBeouf is a fluke. Maybe it's all just a fad. Maybe smartphones are seen as cool because wealth equals cool. Maybe those Facebook pages are only cool to certain niches, albeit large niches, and have appeal aside from their technical subject matter. Maybe there is more to the above observations. I suspect there are more forces at work influencing these phenomena, but I'll look at just one of them. I'll argue that there is a serious, overriding trend at work. What is this trend? The lessening and removal of the geek stigma; it is the vindication of the master of tech, the geek.

But, why? How did this trend come about? Why has the geek made such a comeback? What made the fashion industry take up the geek banner? Here’s my imperfect theory.

Given that technology has become increasingly prevalent and even invasive, it only makes sense that those folks who understand it all, the geeks, share some part in the fame and coolness that new tech radiates. But there is a problem with this. Why didn't this happen in an earlier time, like the 90's, when technology was just as new? It was new back then, so why wasn't the geek romanticized then? My only explanation is that the technology back then, while revolutionary and becoming more common, was simply not as common as it is today. It was "in" people's lives, but most people didn't notice it. It just wasn't as obvious, and in-your-face, as today's tech.

All of this obvious tech that, quite frankly, rules our lives more than ever has had more than its fair chance to influence the culture. Tech influences the culture and the culture influences the tech (see iPhone culture for proof of that). In the past, the geek was a mere hobbyist whose hobbies didn't impact much of anything. Today, his skills and abilities have made him a veritable technocrat, and he thusly enjoys a golden age that most other subcultures can only hope to catch a glimpse of.

It’s a good time to be a geek. As long predicted, the meek have inherited the earth. The culture of cool is only a reflection of this fact.

Friday, June 19, 2015

I Wanna be Anonymous and I ain’t the Only One!!!

No, I'm not talking about the hacker group "Anonymous"; I'm talking about good ol' anonymity. Have you ever searched your name using Google? I do on occasion and I'm usually surprised by what I find. Odd little tidbits of information about my life and my history always seem to leak onto the Internet, in some way, at some time. It's really terrifying that anyone who knows my full name can just perform a Google search to discover who I am. And I have a unique name, so I don't have the luxury of being buried beneath hundreds of similar names. I'm not the only one worried about my information getting into the wrong hands. Some folks are willing to go to extremes in order to hide their online activities, and I can't really blame them. Anonymity, once something that was relatively easy to have, even on the Internet, is now a fleeting thing in our surveillance age. But, you may ask, "Why is anonymity so precious and so prized?" That's an excellent question. Let's begin.

Anonymity is precious, but why? Why would anyone care that their information is put on the web for all to see? Okay, maybe people don't want to have their credit card information or social security numbers readily available to the world. But why would anyone care if random people knew their birthday? After all, it could mean gifts! Well, even seemingly innocent information has its price. Information links to information. Profiles can be made. A birthday here, a few likes and dislikes there, and a serious profile of who a particular person is can be built. Still, why does this matter? Have any enemies? Perhaps you don't have any enemies now, but you will have some in the future. And you don't want any enemies grabbing hold of your information, even if the information is nothing more than your likes and dislikes. You don't want past information to haunt you in the future.

Still not convinced? There's a rather famous quote, which I just can't find right now, by a king(?). He said, to paraphrase, "Allow a man to speak twenty words, and I'll find a way to hang him seven times." The fact is, there are people out there in influential positions who may not like what you have to say on the Internet. What you say could get you fired, and it could even prevent you from getting a job in the first place. In America, you (ideally) can't be imprisoned for saying "offensive" things. Unfortunately, the First Amendment doesn't protect you from losing your job over poor word choice. Perhaps, one day, the First Amendment will no longer protect someone from being sent to prison for saying some words. Perhaps that day is already here. "Hate speech" laws are currently used to lengthen a person's prison sentence. It's probably only a matter of time before those laws can be used to send someone to jail outright. Terrifying to think about, but it is a possibility in America and a reality in other countries.

Here's another, more trivial, reason why anonymity is important. When I say "trivial", I mean it only in a relative sense of the word. Remember when I spoke of "building a profile" on someone? Profiles can be built and used in social engineering attacks. I once had a boss who was targeted by a couple of social engineers. The only thing they needed was the location of his Aikido dojo and, from there, they were able to grab information about his workplace, finances, methods of payment, his schedule, and even his personality. Information can be used to grab more information, so it's best to limit the information you throw out into the world.

Another reason to remain anonymous on the net is targeted ads. Now, this will seem really trivial (and not in the relative sense of the word), but is it? After all, I'm sure some people love having advertisements that match their interests targeted at them. It makes shopping much more convenient. But doesn't companies having that kind of information about you pose some kind of threat? How businesses use that information really isn't up to you. They may sell it, exploit it, or discard it on a whim.

Indeed, in recent years, Facebook, Google, and Amazon have become far more information hungry. They want more information about you than they once did. Real names are now required by Facebook and Google. Aliases will no longer be accepted. What about security questions? Aren’t those used just for user authentication? Think about what those questions ask: where were you born, what is your mother’s maiden name, where was your first job? Sounds to me like these popular websites want to build profiles on their users.

Still not convinced of anonymity's importance? How about a little social proof? In recent years, in response to information-grabbing organizations and a renewed desire for privacy, new apps and services have sprung up from the void. DuckDuckGo, Whisper, Secret, Local Anonymous, proxy services, virtual private networks; the list goes on. Whether these apps and services provide perfect anonymity is another issue. However, they rose out of a growing need people have: the need to keep their personal lives and information private, and the need to voice their honest opinions about controversial subjects.

All this considered, if you want to have some semblance of a life, you necessarily need to sacrifice some anonymity. They don't allow anonymous people to sign up for bank accounts or to get hired at a job. Necessarily, you'll need to give away some information. Does this mean that the treasures of anonymity are forever outside our reach? No, you can do a great deal to remain relatively anonymous. Anonymity, too, is a relative term, and there are degrees of it. Additionally, you can be fairly anonymous on the Internet while not being anonymous in the company of friends and family.

I haven’t gone to extremes to protect my identity either. I’ve protected my information well enough to suit my purposes. Some things can be found out about me such as my name and whatever is put on this blog (good luck attaching my real name to this blog, suckers!), but that’s about it. Nothing too damning. And, like I mentioned before, perfect anonymity isn’t really achievable if you want to have some semblance of a life.

So, how do you remain fairly anonymous? It seems rather simple, and I’ve given some advice in the paragraphs above, but I want to give some explicit advice here. Most of this will seem obvious.

  • When using web forums, don't use your real name (use a cool-sounding alias like L337_hacker or Bonobo the Chimpanzee)
  • Use a throw-away email account
  • Never send information to untrusted websites
  • Use a proxy server when surfing the web (see the sketch after this list)
  • Make sure the information you send over communication channels is encrypted
  • Don't give out your name, address, phone number, email address, banking information, etc. unless absolutely necessary
  • And, above all, don't put any more information online than necessary
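
To make the proxy and encryption bullets concrete, here's a minimal sketch in Python. It assumes the third-party requests library is installed and that some proxy is listening at the made-up address 127.0.0.1:8080; swap in whatever proxy you actually use.

    # anonymity_sketch.py - fetch a page through a proxy, over HTTPS.
    import requests

    # Hypothetical local proxy; both HTTP and HTTPS traffic go through it.
    PROXIES = {
        "http": "http://127.0.0.1:8080",
        "https": "http://127.0.0.1:8080",
    }

    # The https:// scheme means the request and response are encrypted
    # in transit. The proxy still sees where you connect, so pick it wisely.
    response = requests.get("https://duckduckgo.com/", proxies=PROXIES, timeout=10)
    print(response.status_code)

Keep in mind that a proxy only hides your address from the destination site; it doesn't anonymize what you type into a form, which is why the other bullets still matter.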

How much anonymity should a person strive for? That's up to you. I'm doing a great deal more these days to preserve my anonymity on the Internet, and so are other folks. Still, more can be done to protect ourselves from the electric eye.

Monday, June 15, 2015

Expanding on the God-Machine

This is more of an explanation of a past concept than a full-fledged post, but I felt this explanation was needed. In Supercomputers: The Deity just Waiting to Happen, I speculated on the emergence of a God-machine, but I don't think I was being clear on what I meant by "God-machine". I mentioned some criteria that would need to be met before a computer could be called a God-machine. These criteria were that it would need to have the processing power of a supercomputer AND have sovereignty over a large, networked group of computers. I should also mention that, to be considered a God-machine, the computer must possess some features of a God. Seems reasonable, right? This God-machine would need to be both omniscient and omnipotent, or achieve these qualities to some degree. Now, a computer can't really be omnipotent and it can't really be omniscient, but it can get pretty close to both "ideals" provided it has certain characteristics.

First, let's discuss omnipotence. A supercomputer may not be able to achieve omnipotence, but it could, potentially, get close if it had domain over a large network of infrastructure-related computers. That is, computers that control the electrical grid, water/sewer systems, gas/coal/nuclear power plants, and data centers. I think if a supercomputer had domain over these computers, it would be pretty damn close to omnipotence. It would hold power over the continuance of civilization as we know it, and it could throw us back into the dark ages by simply shutting down the electrical grid. I hope future God-machines don't have a bone to pick with humanity.

Now, omniscience. I think a supercomputer could get closer to omniscience than it could to omnipotence. Since a God-machine would necessarily have a lot of processing power and a lot of data, given to it by its extensive networks, it could become indistinguishable from an all-knowing deity. It may even be able to predict events based on probability and its extensive data stores.

A final qualification that I haven’t explicitly listed, though I have suggested it, is that (to be considered a God-machine) a supercomputer must work independently of human guidance. Of course, the supercomputer would need guidance in its more formative stages, however, later on, it shouldn’t need any. It should be self-sufficient, command-wise, and self-driven. It would almost have a ghost or, at least, a half-ghost.

Let’s break this down. A supercomputer must have these characteristics if it is to be considered a God-machine:

  • Pseudo-omnipotence
  • Pseudo-omniscience
  • Independence from human guidance

This turned into a greater post than I had initially anticipated. Hopefully it has cleared up a few things about my "God-machine" concept. The one question that still remains is whether a God-machine is possible. As in the previous article, I'll leave that up to speculation.

Saturday, June 13, 2015

The Joy of a First Computer

Sooner or later, I'm going to give away my age. It will probably be in this post, but I won't give it to you explicitly. You'll have to work for it. Anyway, does anyone aside from me remember their first computer? I remember mine fondly. I remember walking down the stairs of my house into my family's wood-paneled, shag-carpeted basement. I remember the sight of a salesman (or was he a technician?) proudly presenting a detailed image of a mountain lion on a strange-looking box thing. Was it a television? No, there wasn't a remote and you couldn't change the channel. Was it a canvas? No, it was pixelated and flashing. Was it a new kind of radio? Possibly, but who had ever heard of a radio with a television screen built into it? If you haven't already guessed, this was the first time I had seen a computer. And I was mystified.

After a while, the computer migrated upstairs and was placed in an empty room, near my brother's bedroom and the family bathroom on the north side of my house. It was placed on the floor. I soon discovered that this machine had something to offer beyond animal pictures: videogames. So, I played. The one game I remember playing for hours and hours was Mahjong (I don't think the computer had any other games installed; no Internet either). I'd lie on the shag carpeting (seriously, it was all over my house; my house never left the 70's until the early 2000's) and play until my elbows were sufficiently carpet-burned.

Today, I wish I had done more than play games on the computer, but my interest in computing didn't really spark until I was in my early 20's.

I’ve searched long and hard for the exact model of the computer I so fondly remember. I can only go by its brand and by my memory of what it looked like. I think I’ve discovered the model.

[Image: Packard Bell Synera 406]

I think the computer in the image above was the computer of my youth. According to the image description, this is the Packard Bell Synera series. I can't find much on this model when performing a Google search. Perhaps this isn't the computer of my youth. Perhaps I'll post an advertisement on Craigslist: "Wanted: memories of old computer." I'm sure someone will respond; it is Craigslist, after all.

The particular computer model doesn't matter (though I'd still like to know what that damn model is for certain). What matters are the feelings and memories my first computer gave me. As I mentioned, my love of computing, and technology in general, didn't really sprout until I was in my early 20's. However, it was with this computer, my first computer, that the seeds of this love were planted. Everything has its origins, and my love's origin was embedded in that ancient machine.

Friday, June 12, 2015

The Old Internet of 1995…A Simpler, More Exciting Time

[Image: Mosaic web browser]

Every once in a while, I find myself yearning for a not-so-distant past. The past I yearn for is not the 50's, 60's, 70's, or 80's. It's the 90's. Specifically, the mid-90's. Why do I yearn for this time? Well, as I mentioned in an earlier post, 1995 was the year the Internet "opened" for public use. It was also a year in which a fair number of people had access to the Internet. Not a whole lot of people, but enough to form something of a community. In 1995, the Internet was a small community populated largely by geeks, since geeks were one of the few groups in America who possessed enough inclination to actually buy a computer and learn how to use it.

The Internet back in 1995 had a strange feel to it. At least, I think it did, but I'm looking at all this in hindsight. Anyway, the feel was that of an unexplored wilderness where anything was possible. The Internet was a wondrous, mysterious, and even dangerous place. It possessed the qualities I now associate with the deep web. In contrast, the Internet of today feels like a sterile office space where everything is cordoned off into its respective spaces. There's no more mystery. Everything seems to be known; the unknown has been all but vanquished (thankfully the unknown of the deep web is still around). So, it seems that I don't yearn for the past so much as I yearn for that feeling of mystery, excitement, and unexplored horizons. After all, I don't yearn for the dial-up speeds or the baseband connection. I'm glad those are gone. Though, I do feel a pang of nostalgia whenever I hear the AOL tone signaling an attempted connection to the Internet. If you're older than 20 years of age, you know what I'm talking about.

As I mentioned above, I'm looking at this in hindsight. To add another perspective, let's take a look at the Internet from the point of view of a geek in 1995. Did he have the same feeling I get whenever I remember the Internet of 1995? Probably not. Or maybe he did experience the same feeling of mystery and excitement; after all, the Internet was "new" back then. Or, perhaps, the geek of 1995 yearned for the ARPANET of 1985 and before. Maybe the geek of 1995 saw his Internet as I see my 2015 Internet: as a boring, commercial, sterile wasteland. The Internet was commercial back in 1995 and, while not as commercial as today, it would have seemed plenty commercial to the geek who only had the ARPANET for comparison. We all seem to long for glory days that, perhaps, weren't as great as we believe.

I guess it’s all in a person’s perspective. Perhaps the Internet will morph into an even larger sterile, commercial environment 10 years from now, and I’ll long for the glory days of 2015. Heh, it’s a funny thought.

Ten years ago, I couldn’t imagine the Internet getting much bigger. Over those ten years, I’ve witnessed it grow. And right now, I can’t imagine the Internet getting much bigger. Perhaps I ought to learn from my past misconceptions about the Internet. The Internet is a constantly shifting, moving, expanding, growing life-form. It’s difficult to say in advance what will become of this life-form. I think the one sure thing about the Internet is growth. It will grow and grow and grow with no clear end in sight, barring a nuclear war.

With all this in mind, let’s keep an open mind about our current Internet and its future. There’s still much that can happen. Not everything that will be invented has been invented. It’s up to us, the citizens of the present transitioning into the future, to create and innovate. There’s still plenty of mystery and excitement to be had. We just need to look past all that commercial crap and, while that’s kind of hard to do these days, it’s a worthwhile endeavor.

So, go forth and run on wires of glass and electric fire!

Saturday, June 6, 2015

Supercomputers: The Deity just Waiting to Happen

[Image: IBM Blue Gene/P supercomputer]

The increasingly prevalent electronic technology in first-world and developing nations brings with it far-reaching, enormous, and even unseen consequences. An enormous and yet-to-be-fully-recognized consequence of all this technology is integration. You know that old expression, "Everything is connected"? Well, it seems that people have taken that expression a bit too literally. Everything is connected, and not just in the sense that everything influences everything else. Computers, smartphones, tablets, iPads, reading tablets, remote servers, nodes, networks, etc. are all connected in at most eight different ways. No phone is an island and all roads lead to phone. Since we have all this connectivity going on, along with the blessed always-on Wi-Fi connectivity, is it too much to speculate that a supercomputer could come along and take command of all these devices? Perhaps the Supercomputer is already here and looks down, benevolently, upon us lowly net-runners. Let's hope, indeed, that this new God is benevolent, lest our information be given to those who would do us harm.

Okay, that’s enough waxing poetic. Time for some analysis. Supercomputers are already here, and they are “super” only when compared to their contemporaries. “Supercomputer” is, thus, a relative term: a computer is only “super” within its own context. A Commodore 64 isn’t “super” when compared to a Dell Inspiron desktop with a 4th Generation Intel Core i7 processor. But the Commodore 64 was a supercomputer when compared to its forerunner, the Commodore VIC-20.

What I mean by “supercomputer” is a computer that has clearly outclassed its contemporaries in processing power AND has sovereignty over an entire network of connected devices. When I say “network”, imagine a network as large as the Internet. Think of it as a God-machine. And if this God-machine isn’t already here, it’s coming to a future near you.

The supercomputer pictured above is the Blue Gene/P supercomputer at Argonne National Lab. Within its architecture, it possesses 250,000 processors, all connected by a high-speed optical network. Does this supercomputer fit my definition of a God-machine? To a degree, it does. The Blue Gene/P is designed to scale to processing power that reaches into the petaFLOPS range. FLOPS stands for floating-point operations per second, and a petaFLOPS is 10^15 of them. In other words, a fully scaled Blue Gene/P can process over 10^15 floating-point operations per second. It’s also one of the most power-efficient supercomputers in existence.
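To get a feel for where a number like that comes from, here’s a back-of-envelope sketch. The processor count is the one cited above; the per-core figures (850 MHz PowerPC 450 cores, each retiring 4 floating-point operations per cycle through a dual FPU) are my assumptions based on published Blue Gene/P specs, so treat the result as a rough estimate rather than an official figure.

```python
# Back-of-envelope peak FLOPS estimate for a Blue Gene/P installation.
# The per-core numbers below are assumptions from published specs,
# not figures taken from this post.
cores = 250_000          # processor count cited above
clock_hz = 850e6         # 850 MHz PowerPC 450 cores (assumed)
flops_per_cycle = 4      # dual FPU doing fused multiply-adds (assumed)

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Peak: {peak_flops:.2e} FLOPS ({peak_flops / 1e15:.2f} petaFLOPS)")
# -> Peak: 8.50e+14 FLOPS (0.85 petaFLOPS)
```

Keep in mind that peak figures like this are theoretical ceilings; real workloads sustain only a fraction of them, which is why benchmark results always come in below the marketing numbers.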

So, the processing power does meet the requirement of being a God-machine (and there are supercomputers out there with even greater processing power), but is the Blue Gene/P sovereign over an Internet-like network of computers? No. At least, no one has confirmed that it has sovereignty over any such network.

Voluntary sovereignty (“voluntary” meaning the clients on the network consent to being subordinated to the supercomputer) is a serious ethical consideration. An even greater ethical consideration is involuntary sovereignty over a network as large as the Internet. That’s scandal material right there. My guess, and this is just a guess, is that there are computers out there that do possess involuntary sovereignty over millions of clients connected to the Internet; the botnets that quietly herd millions of compromised machines come to mind.

Now, if my guess is correct and these computers do have the processing power of supercomputers, it would mean the God-machines are already here. The only question remaining is just how far and how deep the hands of the God-machine reach. That’s something to which I can’t even venture a guess, so I’ll leave it up to you.

For our sake, I hope the Gods are benevolent.

Friday, June 5, 2015

Electronic Ghosts from the Past

[Image: Pac-Man ghosts]

Recently, I reset my voicemail password so I could access a feature of my phone I hadn’t been able to use for many years: my voicemail. By “many years”, I mean seven years. A millennium in Internet years. When I accessed my voicemail, I was greeted by an electronic voice that proceeded to read out the timestamp on the earliest message. It read “Your message from (---) --- ---- sent Wednesday, September 21, 2008 at 4:31 pm”, then played the recorded voicemail. I had voicemail messages from old friends (who were kids at the time), school counselors, people whom I have since forgotten, unknown people, and more. Some of the more fascinating, and creepier, messages consisted of unintelligible voices in the background, cloaked in electronic noise. Fortunately enough for my own sanity, the messages ended around the year 2010. I don’t know what happened to the messages after 2010. Maybe people decided to stop leaving me voicemail since I never got back to them. Anyway, I had around 20 recorded messages stored in my voicemail box. All have been deleted (or have they?). Are they truly gone?

This was an eerie experience, to say the least. It was like listening to ghosts from my past, all perfectly recorded, stored, preserved, and transmitted via electronic signal. The incident reminded me of a common phenomenon in the tech world: old answering machines and phone lines often keep operating years after the associated business or organization has gone defunct. If you’re lucky, or not so lucky, you can hear a playback message when you call the number of a no-longer-existing organization. The playback message is always creepy, no matter the content. The message carries a strong association with death, since it was once used by a “living” organization. Now that the organization is defunct, it’s like hearing a voice from a corpse. Perhaps now you understand why I found my old voicemail messages disturbing, if you didn’t before. Perhaps, too, this blog will go inactive one day and I can visit it again, many years from now, to peek at the works and writings of my past self. Technology will have preserved it all in fine detail. I can only hope that I won’t be too critical of my past writings.

Truly, a huge share of the people on planet Earth have left at least some kind of digital footprint on the Internet. Most people don’t care about these footprints (likely because they don’t think about them). Other people, for various reasons, hope to wash away any footprints they have left. This is next to impossible. Your digital footprints have left behind digital dirt that will remain even after the most intense of scrubbings. It’s almost as if a law of conservation applies to data: though data may change its form, it never truly ceases to exist. My voicemail messages may still exist in some form or other. Perhaps they are in the “recycle bin” of Verizon’s servers. Perhaps someone else kept copies of them. That wouldn’t be surprising, though I can only pity the person who has to sift through my voicemail messages. I don’t even like listening to them.

Is there any information archive more reliable than the Internet and the servers which are its foundation? Even when the power is out, the data lives on in long-term persistent storage. So be mindful of what you put, and where you go, on the web. Be mindful because anyone may have enemies, and it’s best not to hand your enemies any kind of advantage over you. And be especially mindful for this reason: what you produce will be your legacy, for better or for worse, so be sure you leave a lasting and good one.

Thursday, June 4, 2015

The Technocratic Elite and the Future They Hold in Their Hands

[Image: an engineering blueprint]

Imagine you are living in a city that receives all of its hot water from one boiler room, run by one man who possesses exclusive knowledge of how to run the boilers. Now, you can imagine that this man has quite a bit of power. The city needs hot water, and the boiler man is the only person who can supply it, thanks to the knowledge that only he possesses. This man is known as a technocrat. His technical abilities have lifted him above, and separated him from, his fellow man. He is a member of the elite, and all people, if they are to receive the benefits only he can provide, must work to please him. And that is the future we are walking into.

Technical ability is one of those skills that has been prized throughout history. People who possessed technical ability in useful disciplines were often handsomely rewarded for their services. Blacksmiths in the Middle Ages commanded a high price thanks to the usefulness of their craft. Merchants and artisans during the Renaissance were invaluable thanks to their business-savvy minds. Today, plumbers, electricians, and tradesmen are valued members of society thanks to their own technical abilities. And while many technical abilities are still prized, few are more valued right now than skill in computing and mechanical engineering. But, to keep fully with the spirit of this blog, I just want to focus on technical ability in computing.

The use and ubiquity of computers have grown exponentially since the introduction of the first operating system roughly 60 years ago. Just 25 years ago, computers already occupied most businesses, offices, government buildings, organizations, and many homes. Now, in 2015, computers have a hold so tight on first-world civilizations that we can’t imagine our lives without them. Just about every piece of electronic equipment we own has some sort of computing device built in. They are in our cars, televisions, phones, tablets, microwaves, dishwashers, stoves, ovens, washing machines, cash registers, home-heating systems, entertainment systems, game consoles, and so on. You can find computers where you least expect them. I smoked an e-cigarette for a short while, only to learn later that it too had a small built-in processor, which, quite frankly, blew my mind. It was at that moment that I fully appreciated how far and how deep computers have spread.

What does all this mean? It means technical ability reigns supreme in the hands of the technocrats.

But what is a technocracy? In the opening paragraph of this post, I called the boiler man a technocrat because he had possession of useful and exclusive knowledge. However, he’s only a technocrat in an unofficial, but important, sense of the term. The traditional definition is as follows: a technocracy is a system of government controlled by a group of elite technical experts. The idea of a technocracy first emerged in the early 1900s. The original idea was to have scientists and engineers replace business people and politicians in governing society. Sounds good, right? Well, I’m not here to argue about the merits of a technocracy. Instead, I want to focus on the technocrats themselves and the hold they have over most people in our high-tech civilization.

In the United States, we don’t have a technocracy by law. We have a Constitutional Republic with representatives who are elected by the voters. A technocracy doesn’t work that way. Instead, the lead technicians assume social and governmental control over the populace by virtue of their ability. Technocrats are intended to be benevolent dictators who know what’s best for the common man and are to act in the common man’s best interests.

To reiterate, a technocracy is a system of government controlled by elite technical experts. The United States doesn’t have a technocracy, or does it? As mentioned above, the United States isn’t a technocracy by law. Instead, I’ll argue that it is a technocracy by fact.

Think about it. We are absolutely dependent on our technological devices. Most people would be severely lost without their personal gadgets and gizmos. Our technology has, on one hand, greatly improved people’s standard of living. On the other hand, it has made most people incapable of living without it. You could almost say that most people are “ruled” by technology. And who rules the technology? Who designs it, implements it, and understands it? The technocrats (and the boiler man), of course. Thanks to our sophisticated technology, we live in a de facto technocracy! The technology that we depend upon has elevated the technocratic class to a god-like status.

What a disappointment if you aren’t a technocrat! If you are a technocrat, it’s a pretty good deal to say the least.

Is this a sorry state of affairs? Not necessarily, provided our technocratic overlords are a kind and benevolent bunch. But if you aren’t comfortable with this state of affairs and can’t stand the thought of being at the “mercy” of the technocrats, you can always improve your own technical abilities. That is the route I have chosen, and I don’t see myself regretting it. Technology is here to stay. What matters now is whether you choose to understand it or leave that responsibility to the technocrats.

Wednesday, June 3, 2015

Website Aesthetics

Does anyone else dislike the so-called “modern” aesthetics of some websites? To see what I mean, just look at Squarespace. These websites go for the sleek, simple, “no borders” look. And while the goal of the modern layout is to offer a sleek and smooth experience, it often delivers only a confusing mish-mash of web content.

I’m just glad most sites have stuck with a more traditional web style. Don’t get me wrong, I do like some modern web aesthetics. Just look at my blog. It’s a combination of old and new web styles (to see an example of the old, just look at cyber.eserver.org). When editing the HTML/CSS of my blog, I do my best to take the best aspects of both the old and new styles and integrate the two, making the blog both functional and stylish. Websites that use the “modern” style exclusively seem to be going for style at the expense of function. And it’s questionable whether they’ve even achieved style.

Anyway, this will be a short post. I just wanted to give my two cents on the matter.

Postscript: Here’s another site which I think successfully melds old and new web styles.

Programming a Ghost

[Image: Ghost vulnerability logo]

The worth of anything can be measured by its tendency to inspire thought. In other words, if it can make me think, then it has to be good! Fair enough, right? Ghost in the Shell: Stand Alone Complex is one of the few anime that have provoked a good deal of thought in me. In fact, no other anime has inspired me to think about such worthwhile topics as consciousness, perception, metaphysics, artificial intelligence, and programming. Lately, Ghost in the Shell: Stand Alone Complex has brought yet another idea to my mind: the idea of the ghost. For those unfamiliar with the series, a person’s personality, thoughts, behaviors, attitudes, ideas, and so on are collectively referred to as the person’s ghost. In a nutshell, a person’s ghost is their consciousness (I’ll use the words “ghost” and “consciousness” interchangeably in this post). The ghost is one of the most referenced themes in the series. The plot continuously relates to the ghost, and the characters speculate about it at length. It’s even speculated that a ghost can be created, or emerge, from self-modifying and self-improving artificial intelligence.

So, can a ghost be programmed, or emerge from an existing program? Before we can determine this, we must know how we determine the existence of a ghost in a person. Well, it’s pretty obvious that people have a consciousness, right? We see a person, we hear them talk, we listen to them as they express themselves, we observe them as their ideas, opinions, and attitudes change. It’s pretty clear that people are conscious. People are even conscious of themselves. They are self-aware. Can a program become self-aware?

Now that we know how to determine whether someone has a ghost, can we tell whether a machine has a ghost? Enter the Turing test. A Turing test measures a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. To pass a Turing test, at least by the criterion used in the much-publicized 2014 test, a machine must convince judges that it is human at least 30% of the time over a series of five-minute rounds. Has a machine ever passed a Turing test by exhibiting intelligent human behavior? Most machines are quite primitive in their ability to act in an intelligent manner (compared to an intelligent human). Then again, most people don’t act in a reasonably intelligent manner (compared to an intelligent human). Therefore, most machines will perform in ways indistinguishable from those of most humans. Defining intelligent human behavior is another ordeal altogether, but I trust the people who run Turing tests to know what they are doing. Okay, now that we’ve got that resolved: the answer to the aforementioned question is “Yes”, a machine has passed a Turing test (the much-debated Eugene Goostman chatbot in 2014, if you believe its organizers).
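To make that pass criterion concrete, here’s a toy sketch of the scoring. The 30% threshold and the round structure come from the description above; everything else (the function name, the simulated judges, the 33% fool rate) is a made-up illustration, not how any real test was administered.

```python
import random

def passed_turing_test(judge_verdicts, threshold=0.30):
    """judge_verdicts: booleans, True where a judge mistook the machine for a human."""
    fooled = sum(judge_verdicts) / len(judge_verdicts)
    return fooled >= threshold

# Simulate 30 five-minute rounds in which the machine fools a judge
# roughly a third of the time (a hypothetical rate, chosen for illustration).
random.seed(2015)
verdicts = [random.random() < 0.33 for _ in range(30)]
print("Passed:", passed_turing_test(verdicts))
```

Notice how low the bar is: the machine doesn’t have to fool most judges, just a bit under a third of them, which is part of why I have reservations about the test below.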

So, a machine can appear to be a person by appearing to have a ghost, a consciousness. It should be noted that appearing to have a ghost and actually having a ghost are two completely different things. However, the fact that a machine can appear to have a ghost makes it indistinguishable, to us, from actually having one. Let us note that the machine appeared to have a ghost during a Turing test, and Turing tests are made up of many five-minute rounds. I’m sure the machine would be less likely to appear to have a ghost if the rounds were longer, if the questions became more complex, or if the environment were altogether different. Additionally, appearance is subjective. The machine could appear human to one person and non-human to another. Additionally still, it matters how we define what a reasonably intelligent person would do in a given situation. Context matters.

All in all, even considering my reservations about the Turing test, the fact that a machine has passed the Turing test is a great feat in programming and, perhaps, we are well on our way to seeing the emergence of a ghost within a machine. But, for a moment, let us consider how difficult it would be to program a human consciousness.

A comparison may be made between basic brain activity and basic computer functions. The brain is made up of roughly 40 billion interneurons. Neurons are a lot like on/off switches. They require just the right amount and kind of chemical signal before they “fire” their electric charge down the axon to the axon terminal. And that’s all basic brain activity is: an enormous series of neurons getting triggered and releasing their chemicals, which, performed en masse, forms human thought. Basic computer functions work much the same way. The most basic machine language is binary, just a long sequence of 1s and 0s. The 1s represent “on” and the 0s represent “off”. They either “fire” or they don’t. See where I’m going with this? The most basic machine language seems to mimic the most basic brain function. Since this basic brain function, performed in aggregate, forms human consciousness, does that mean a sophisticated enough binary program could form a consciousness equivalent to a human’s?
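The neuron-as-switch idea actually has a classic formalization, the McCulloch-Pitts threshold neuron from 1943, and it takes only a few lines to sketch. The weights and threshold below are arbitrary numbers of my choosing; the point is just the fire/don’t-fire behavior described above.

```python
def neuron(inputs, weights, threshold=1.0):
    """Fire (1) only when the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs and one inhibitory input (weights are arbitrary).
print(neuron([1, 1, 0], [0.6, 0.6, -1.0]))  # 1.2 >= 1.0 -> fires: 1
print(neuron([1, 1, 1], [0.6, 0.6, -1.0]))  # 0.2 <  1.0 -> stays silent: 0
```

One unit like this is trivially simple; the analogy in the paragraph above only gets interesting when billions of them trigger one another in aggregate.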

Let’s say that my comparison above holds weight: what would it take to program a consciousness? Well, the code would likely need to be similar to that of the human brain. It would need to be sequential, self-modifying, and a little illogical. Would simply programming a personality using 40 billion lines of code work? Would the code become self-aware? That’s hard, if not impossible, to tell. Could the code be programmed to continually expand itself by incorporating new data into its structure? Self-modifying code is already possible, but could code continually self-modify to the same extent as a human brain? Would all this self-modification lead to the emergence of a ghost? Possibly, but, sadly, we may never know.
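For readers wondering what “self-modifying code is already possible” even looks like, here’s a deliberately tiny toy in Python. Everything in it (the respond function, the rewrite rule) is a made-up illustration; real self-modifying systems are far more involved, and nothing here is anywhere near a brain.

```python
# A toy program that rewrites one of its own functions at runtime.
source = "def respond(x):\n    return x + 1\n"

def rebuild(src):
    """Compile the current source text into a fresh callable."""
    scope = {}
    exec(src, scope)
    return scope["respond"]

respond = rebuild(source)
print(respond(3))  # 4

# "Self-modification": the program edits its own source, then rebuilds itself.
source = source.replace("x + 1", "x * 2")
respond = rebuild(source)
print(respond(3))  # 6
```

The gap between this and a brain is obvious: the rewrite rule here was supplied from outside, whereas the brain rewrites itself in response to its own activity.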

Due to the limits of human perception, we may never be able to tell whether a machine truly possesses a consciousness or whether a machine is self-aware. Yes, we can run Turing tests ad infinitum, and the machine may always convince the human that it is a human instead of a machine, but, as mentioned earlier, this doesn’t mean that the machine has a consciousness. It just means that the machine is sophisticated enough to trick a human, and that’s it.

Though we may never know for certain whether a machine has a ghost, we may still highly suspect it. Electronic computers have come a long way since their appearance on the scene in 1942. And some computers are now sophisticated enough to convince a human that they do have a ghost, even though they may not have one. Yes, I just admitted that I think the machine that passed the Turing test may have a consciousness. Indeed, who’s to say the machines we currently rely upon don’t already have ghosts? After all, we humans can’t really know whether a machine is truly conscious of itself. Nevertheless, it’s quite fun to speculate on. I predict that, as the years go on, machines will convince us that they are conscious of themselves. What then? The political, social, and economic repercussions would be enormous. But I won’t talk about that now. For now, let’s just humble ourselves with the fact that we may never know whether a machine is conscious. If it’s any comfort, you can look forward to the possibility that, if machines do develop consciousness, it will happen in the near future. The future is always closer than you think.

All in all, just something to think about.