Lecture 12 – Hacking

Although I didn’t manage to attend this lecture, thanks to the stress of looming deadlines and meetings with tutors to make sure I actually had a clue what I was doing on the assignments, I have been through the reading and lecture slides for it and drawn a few conclusions.

The first thing to note about hacking is the public view of and reaction to it. A number of current and past representations and discourses around hacking tell us that it is a negative thing, something that poses a risk to us all, our security and our privacy. The warnings and signs we are given exaggerate any ‘danger’ that could be perceived to result from hacking. However, in contrast to the misinformation we are fed, there is also an apparent lack of information, which only raises our suspicions and concerns about hacking further.

It is argued by groups such as Anonymous that hacking is not about attacking anyone or anything, but rather about taking back the rights to freedom and privacy that are slowly being taken from us. This doesn’t mean a desire for no regulation or laws, but rather for an absence of constraints and a focus on freedom of speech.



Although many believe hacking is focused on hiding and secrecy, it is actually about transparency, as well as collective and individual privacy and security. It’s about community and non-profit groups and individuals, rather than businesses and organisations that are out to make money or governments out to enforce rules and regulations. It’s about bringing like-minded people together to protect rights and privacy against capitalistic ideals.

These views might not be shared by everybody, but those who practise, enjoy or admire hacking and hackers generally share the view that it exists to protect and take care of everyone. Since those in charge of our organisations, countries etc. don’t seem to share this view, it has set one very powerful group against another group that is gaining in strength as time goes on, as more people realise that their privacy is not in their hands.



Lecture 11 – Big Data

For the lecture this week we focused on the uses and limitations of big data. Collecting large amounts of data can be useful, but it’s full of challenges in terms of physically collecting it, storing it, distributing it, and being clear about the purpose of having it in the first place. There are also privacy and security issues – the big data we’re talking about isn’t this year’s football scores, it’s people’s geographical movements, election results, social media usage etc.

Big data has been classified using the ‘three Vs’ – high volume, high velocity and high variety. These are the three variables that need to be considered when talking about big data; however, they only form half of the definition put forward by Laney. The other half focuses on the usage of the data once it’s been collected: it demands “cost-effective, innovative forms of information processing for enhanced insight and decision making”. Not only are there technical problems when collecting big data, there are also problems of operational meaning – information has to be useful and then implemented in the right way. Jacobs notes that the key pathologies of big data are those of analysis (Jacobs, 2009: 39).
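The three Vs can be sketched as a toy check. This is just an illustration – the thresholds below are entirely made up for demonstration, since in practice ‘big’ is always relative to what your infrastructure can handle:

```python
def is_big_data(volume_gb, events_per_sec, n_formats):
    """Toy classification of a dataset against the 'three Vs'.
    The thresholds are arbitrary, chosen only to illustrate the idea."""
    high_volume = volume_gb > 1_000          # volume: more than ~1 TB stored
    high_velocity = events_per_sec > 10_000  # velocity: a fast-moving stream
    high_variety = n_formats > 3             # variety: many mixed data formats
    return high_volume and high_velocity and high_variety

# A city's sensor feed: 5 TB stored, 50k readings/s, 6 different formats
print(is_big_data(5_000, 50_000, 6))   # True
# This year's football scores: tiny, slow, uniform
print(is_big_data(0.001, 1, 1))        # False
```

The point of the sketch is that all three Vs have to be present before the technical and analytical challenges Laney and Jacobs describe really kick in.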

Within the current technological age, big data is seen as a commodity – large amounts of data on certain things can influence decisions. This information can be used to make a profit, affect politics and more. Although we may not realise it, most of us see the direct effects of big data every day, and see how those who use it make it manageable and productive. For example, Facebook, Google etc. use computer algorithms to provide us with information we would find useful – showing us posts from the people we interact with most over those we don’t talk to much, or filtering Google results by our location to limit the number of results our searches return. This is done so we have a better user experience, but these sites need to gather information about the user before they can hone big data down to a more manageable size for us – and how do they do that?
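A minimal sketch of the kind of filtering described above: ranking posts by how often the viewer interacts with each author. This is purely hypothetical – real feed algorithms weigh hundreds of signals – but it shows the basic shape of the idea:

```python
def rank_feed(posts, interactions):
    """Order posts so that authors the user interacts with most come first.
    'interactions' maps each author to a (hypothetical) engagement count."""
    return sorted(posts,
                  key=lambda p: interactions.get(p["author"], 0),
                  reverse=True)

# Toy data: we talk to alice constantly, bob occasionally, carol never
interactions = {"alice": 42, "bob": 3}
posts = [{"author": "bob",   "text": "..."},
         {"author": "carol", "text": "..."},
         {"author": "alice", "text": "..."}]

for post in rank_feed(posts, interactions):
    print(post["author"])  # alice, then bob, then carol
```

Even this toy version needs the `interactions` data before it can rank anything – which is exactly the point made above: the personalisation requires gathering information about the user first.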

This is where issues around privacy and security come in. Legislation such as the Investigatory Powers Act and the Right to be Forgotten now forms an important part of security measures being enforced all over the world. Internet service providers and mobile phone service providers are being forced to store customers’ browsing and usage history. This data can then be accessed by police and government departments, which could lead to the profiling of specific people in society. There are also security issues with this big data, as it will need to be stored in server facilities – making it vulnerable to thieves and hackers. It’s also concerning what governments and agencies can use this information for in the long term, as they don’t need to tell us when and why it’s being accessed. The Right to be Forgotten is also important to consider, as it means we have the right to have personal information deleted from the records. However, it’s not an easy thing to do and comes with complications – does it make it look as though we have something to hide?

The security and privacy issues are definitely something I want to consider in my approach to Assignment 2 on this module, where I will be writing a report based on a thesis of my own design, answering a specific question of my choosing in relation to the subject matter discussed on the module and beyond.


Lecture 10 – Software

So, we’ve discussed hardware, now for software. As I said in one of my previous posts, one is useless without the other – technology requires hardware and software working together. In this lecture we looked at the beginning of software as a concept, the ecology and epistemology of software and meta-mediums.

From Small Beginnings

Most people will never have heard of Margaret Hamilton, and yet within the realms of software she’s one of the most important people there is. Hamilton pioneered the concept of software engineering and helped NASA in its quest to get humanity to the moon in the 1960s. Thanks to her, notions of what software was, what it could do and how we could use it became something to focus on. She envisioned a world where hardware and software could work together and be used by all, for the benefit of all, rather than as something exclusive and expensive. In the early days software engineering took years to implement, and it has taken Hamilton most of her life to see software spread out across the globe from its beginnings at NASA.


As I discussed in my blog posts on ecology, no medium can work alone; it takes a combination of hardware and software, multiple programmes and networks, working together to create and use technology. Without this ecology, web services, the internet, mobile applications, online gaming and more would not be possible. You could still have a TV, mobile phone or laptop, as that’s just the physical hardware built from components – they just wouldn’t be able to do much.

One of the leading voices on the ecology of software in media is Manovich. He’s written extensively about how our technological ecology is made up of countless parts – some of which are software based. I’ll be talking more about Manovich and his book, Software Takes Command, in the seminar 10 blog post.


Keeping it simple, epistemology is basically knowledge and understanding. It’s not only the information we have but how we perceive and make sense of that information. So, you might know that the moon orbits the earth, but this knowledge means nothing if we don’t understand what that means, how it happens, why it happens etc. We have to justify and rationalise the knowledge in order to put it into perspective and make it useful.

Our brains can be compared to the storage hard drives in computers and other pieces of hardware. Information is processed by our senses – touch, taste, sight etc. – and stored in the brain; computers receive information from the user and store it on the hard drive. Both the human brain and the hard drive have a maximum amount of data they can store – a capacity – and both have to be able to process and understand the data stored within them, otherwise there’s no use in storing it. If you saved a Microsoft Word document onto your PC but didn’t have Microsoft Office installed, and therefore couldn’t access the document, what would you do? You’d delete the file.

In a world so reliant on software and so full of data and information, we are now in an age where we cannot understand and make sense of all the information we have. We rely on computer algorithms to make sense of data and display it in a way we can understand – usually visually. If you collected the traffic data for an entire city for a week, there’s no way you’d be able to spot patterns and correlations within that huge amount of data, or be able to express them in a useful way. This is why we rely on visual techniques to view the data and on software to store and analyse it. A computer can carry out millions of calculations and make decisions based on the information and settings it is programmed with – the human brain is a marvellous thing, but it can’t compete with that.
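The traffic example above can be made concrete with a few lines of code. The data here is simulated (with rush-hour peaks deliberately built in at 8am and 5pm), but it shows why we lean on software for pattern-spotting: a hundred thousand raw readings mean nothing to the eye, while a simple aggregation reveals the peaks instantly:

```python
from collections import Counter
import random

random.seed(1)
# Simulate a week of vehicle sightings: each value is the hour (0-23)
# a vehicle was seen, with artificial peaks at 8am and 5pm
weights = [8 if h in (8, 17) else 1 for h in range(24)]
sightings = random.choices(range(24), weights=weights, k=100_000)

# Aggregating by hour turns 100,000 raw values into 24 readable counts
by_hour = Counter(sightings)
peak_hour = by_hour.most_common(1)[0][0]
print(peak_hour)  # 8 or 17 -- one of the rush hours built into the data
```

In practice the aggregated counts would then be fed into a chart or map, which is the visual step the lecture was getting at.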


The computer is not a medium on its own, as we understand mediums to be, but rather one of the first meta-mediums. As Manovich describes, the computer is a “combination of existing, new and yet to be invented media”. When we use a computer we use pieces of code that were written decades ago at companies like IBM and Microsoft; they have since been given a new case and many tweaks, but some of the principles remain – using code to write programmes that can be accessed from a main screen, the desktop. Technologies such as web services, sound, cameras, disc drives and more are contained within the software and hardware of a computer – making it a distinct meta-medium, an individual ecology that links to the wider network of the media ecology.


I think that’s enough for today; I’ve covered the most thought-provoking points raised in this week’s lecture. Check out my seminar 10 blog post to read a little more about Manovich and his thoughts on software!

Lecture 9 – Hardware: Antennae and drones

As I mentioned in my last post, the next few classes in this module will be focused on hardware and software – the two entities that make up modern technology. When you buy a mobile phone, it’s not just the hardware you need – the screen, the case, the antenna, the camera etc. – it’s the software inside that allows you to make calls, browse the web and so on. Hardware and software work together; one without the other isn’t much use.

I’ve already covered the first class on hardware in my most recent blog posts, where we discussed some of the social and political issues around hardware. What are the implications of using rare earth metals in the production of technologies? Where does our data go when we save to the cloud or send something through the internet? Who has access to this information? These are all questions we should know and understand the answers to; it’s in the interest of every human on the planet to understand what freedoms we have and what freedoms we only believe we have.

This week’s class focused more on the physical hardware itself: what drones are and what they’re used for, the evolution of mobile phone technology, and the ‘internet of things’ and ubiquitous computing – the development of the internet.


Although there are still lots of military connotations surrounding drones, they’re becoming increasingly popular for creative and entertainment purposes. They can be used for filming aerial footage, taking bird’s-eye photographs or just for flying in the local park on a Sunday afternoon for fun.

The term drone originates from the honeybee drone – the male bee who mates with the queen in order to continue the good work of the hive. To make a long story short, the male bees are there to mate with the queen, whilst the female bees – the workers – do all the work and collect the nectar for the hive. The flying drone is named after these male honeybees: they have stouter bodies than the workers, with larger eyes, and although they’re larger than worker bees they can match their speeds.

Drones are crucial in a world where information is key. They are part of a network of ‘smart’ objects that collect data and interact with other hardware. Although they can be used simply to fly over an area, take images and store them on a microchip, they’re primarily designed to relay information via a data-link – transmitting the data they gather to other pieces of hardware on land, in the air, in space or at sea. Many have made a connection between drones and the age of ubiquitous computing – the third age of computers we are currently experiencing – though this may be premature, since this form of interactivity relies heavily on an infrastructure of hardware and software working together to collect and make sense of the data drones gather.


We then moved on to discuss the development of mobile phone technology. Over the last few decades huge leaps have been made, and they can be measured in periods lasting around 10 years each. From the first commercial networks in the early 1980s through to 5G, a number of key discoveries and creations have allowed for new experiences and uses for mobile phones.

1G – From around 1981 to 1991. Basic mobile phone technology: analogue calls made using very large devices that required battery packs and large antennae connected to the device.

2G – From around 1991. Introduced more efficient, digitally encrypted calling, as well as new services such as SMS (text messaging, first used in the early 1990s). There was also an advance in the data storage capacities of mobile phones, to allow text messages to be saved to devices.

3G – From around 2001. Based on the Universal Mobile Telecommunications System set of standards, it gave mobiles internet access. The US lagged behind the rest of the world here, as the military owned the lower frequencies that allow for faster, more efficient mobile communications. This is where the iPhone comes in – making the most of internet access and giving people applications (apps) so they could use their mobile device for a wide range of tasks.

4G – From around 2010. In 2008 the International Telecommunication Union set out a specification for 4G connections, which included peak speeds for high- and low-mobility communications: for high-mobility communication (trains, cars etc.) the connection would need to reach around 100 Mbit/s, and for low-mobility communication (pedestrians or stationary users) 1 Gbit/s. High demand means more mobile phones needing more data, and therefore more low-frequency networks and more antennae. 4G allows for data-hungry apps and tasks such as Skype, mobile online gaming and HD video streaming.

5G – Focused on higher capacity rather than just faster speeds, and on ensuring services require less data and battery power – lower running costs, lower battery consumption etc. It should be close to 100% reliable, especially in times of crisis or natural disaster. It also needs to meet the needs of the third age of computing – the Internet of Things: with more and more devices connecting to the internet over data networks and the lower frequencies filling up, there needs to be a focus on finding ways to streamline data and make the most of the connections we have.
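The ITU peak-speed targets quoted for 4G above can be made concrete with some quick arithmetic – for instance, how long a 1.5 GB HD film would take to download at each theoretical peak rate (real-world speeds are, of course, far lower):

```python
def download_seconds(size_gb, rate_mbit_per_s):
    """Time to transfer size_gb gigabytes at the given rate,
    using decimal units (1 GB = 1000 megabytes = 8000 megabits)."""
    megabits = size_gb * 1000 * 8
    return megabits / rate_mbit_per_s

# 1.5 GB film at the two IMT-Advanced peak targets
print(round(download_seconds(1.5, 100)))   # high mobility, 100 Mbit/s: 120 s
print(round(download_seconds(1.5, 1000)))  # low mobility, 1 Gbit/s: 12 s
```

The factor-of-ten gap between the two targets is the whole point of the high/low mobility split: a stationary user can be served far faster than one on a moving train.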

Ubiquitous Computing 

We are currently at the start of the third age of computing – the ubiquitous computing age. First there were mainframes, where lots of people shared the same machine; computers were a one-off purchase for a research or education centre, as they were very expensive and hard to maintain. Next came the age of personal computing, where individuals use their own devices for specific purposes. We are now emerging into the age of ubiquitous computing, where technology recedes into the background and becomes something less tangible – something we take for granted and rely upon so much that it’s no longer something to focus on.

Mark Weiser coined the term ubiquitous computing in the late 1980s and has come to be regarded as the father of the Internet of Things. He foresaw that computing would become human-centred and that future generations would rely on technologies that were only in their infancy at the time. He imagined a world beyond one shared device per hundred people, and saw past the current age in which we rely on our personal devices for our interactions and personal information: “We will not need to carry our own devices around with us. Instead, configurable devices, either handheld or embedded in the environment, will bring computation to us.”

The 1990s saw around 1 billion devices connect to the internet, and the 2000s added another 2 billion. The Internet of Things predicts that humanity will connect a further 28 billion devices by 2020, and we are constantly finding and creating new devices to connect. Not all of what Mark Weiser said about ubiquitous computing is around us yet but, every day, we get closer.
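Adding those figures up per period shows how steep the predicted curve is. The numbers are the rough estimates quoted above, not authoritative statistics:

```python
# Devices added to the internet per period, in billions (rough figures
# from the lecture, with the final entry being a prediction)
devices_added = {"1990s": 1, "2000s": 2, "by 2020 (predicted)": 28}

total = 0
for period, added in devices_added.items():
    total += added
    print(f"{period}: roughly {total} billion devices online")
# The predicted 2020 total comes to around 31 billion devices
```

The striking part is that the final decade alone accounts for roughly nine times everything that came before it – which is exactly the shift from personal to ubiquitous computing the lecture describes.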



Lecture 8 – Hardware: Silicon and Political Spaces

Originally silicon referred only to a specific element on the periodic table. It’s used to create a wide variety of products – including many of the modern technologies we take for granted. Silicon Valley is a renowned location in the United States; it was first so named due to the number of innovators and manufacturers in the region working with the silicon chip, and it is now home to a number of the world’s largest technology corporations. Companies such as Facebook, Google, Yahoo, YouTube and more have their head offices in the area, and there are also thousands of start-ups in Silicon Valley.

Silicon Valley is evidence that there are geographical centres for technology and innovation. Specific spaces and how they are used are not chosen at random, but rather appear and evolve over time due to political, economic and social developments. The evolution and control of these spaces affects us all and is most certainly a political issue – one that most of us do not typically consider.

As well as geographical centres for technological development, there are also centres for internet networks – areas where servers, companies etc. are based. For example, the recent DDoS (Distributed Denial of Service) attacks were focused on specific companies and networks; interactive maps of the attacks show the areas where network activity and usage is highest, as well as key points and locations where businesses and their servers are based. As you can see in the image below, areas in America, the UK and East Asia are the busiest in terms of traffic, and clear centres for internet usage are evident.

[Image: world map of DDoS attack traffic]

When we think of the technological giants and where they are based, this map makes sense. It’s understandable that the economic and political powers of the world are hubs for internet usage. However, where does that leave the citizens of these areas? Do they benefit from the arrangement? And what about those in countries with no servers, limited internet access and very few technology companies? We have to remember that the geography of the internet has real implications, and that its inequalities can affect people and places on a day-to-day basis.

As well as what is physically around us, the politics of the technological age also concerns what we cannot see. The internet and the information contained within it are intangible and therefore hard to control. This means that those who do manage to gain control over significant data are, firstly, powerful, influential and financially well-off enough to get hold of the information and, secondly, capable of making large amounts of profit from it.

Saving our information to cloud servers is becoming ever more popular. We back up our whole lives to these mystical clouds that we cannot see or understand, without asking who has access to the information. Our photos, documents, videos etc. are uploaded; we back up our mobile phones, music, contacts and more without a second thought. And then there are our e-mails. We send them for business or pleasure and assume their contents are private, since we send them to specific people – but they aren’t letters. They aren’t something physical we can seal between here and there.

The personal data of millions of users is connected through the internet, via cloud servers, e-mails, social media accounts and more. But what do the companies who run these cloud servers get from our data? What do they stand to gain from knowing these things about us? To make a long story short, they sell this data – to those who are willing to pay for it. And those willing to pay for it are usually those who will gain the most from having it. You guessed it. Advertisers.

If they know what programmes we watch, what bars we visit, which actors we have a thing for, what books we read, what technologies we use and buy etc., they can target us in new ways and be more specific about which products are advertised to us. This saves them money in the long run, as they can narrow their margins, and makes them money in the long run, by ensuring the products marketed directly to us actually appeal to us in the first place, based on our user history.
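A toy version of that targeting logic: match adverts to a user by the overlap between their browsing history and each advert’s tags. This is entirely hypothetical – real ad exchanges are vastly more complex – but it captures the basic mechanism:

```python
def best_advert(user_interests, adverts):
    """Pick the advert whose tags overlap most with the user's interests.
    Both the interests and the advert tags are hypothetical examples."""
    return max(adverts, key=lambda ad: len(set(ad["tags"]) & user_interests))

# Interests inferred (hypothetically) from the user's browsing history
user_interests = {"sci-fi", "craft beer", "gadgets"}

adverts = [{"name": "garden furniture", "tags": ["home", "garden"]},
           {"name": "smart speaker",    "tags": ["gadgets", "sci-fi"]},
           {"name": "lager multipack",  "tags": ["craft beer"]}]

print(best_advert(user_interests, adverts)["name"])  # smart speaker
```

The better the user-history data, the better the match – which is why that data is worth paying for in the first place.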

These issues and arguments are something we all need to consider; technology and the information we share will only become more powerful from here, and it’s definitely in our interests to be careful with it and protect it.

I’m gonna leave this blog post here for now. I’ve covered the main points raised in the lecture and, hopefully, given you something to think about. The seminar will look in more detail at Silicon Valley and the physical aspects of technology rather than the abstract and magical data of the cloud. It’ll be interesting to discuss the social and political effects of producing the technologies we so rely on, as opposed to the social and political effects of the internet and the information on it.


Lecture 7 – Hard and Soft Technology

This week the turnout and focus levels in the lecture were pretty lax! Since the first seminar group – myself included – was presenting its prototypes straight after the lecture, people were either too busy working on the assignment to come to the lecture, or too busy thinking about the assignment to pay full attention to it.

However, this isn’t a major problem, as the lecture basically served as an introduction to the topics for the next two weeks and prompted us to think about certain elements of technology individually before the next two lectures. Over the next few weeks we shall be discussing first hardware and then software. First, though, we must think about how we define hardware and software. Does cloud technology count as software, even though it’s based on hardware? The items we save to cloud servers are physically saved to a hard drive somewhere; we just can’t see it.

We also discussed Halt and Catch Fire (season 1 trailer above!), a series that began airing on AMC in 2014 and is now available on Amazon Video. Our lecturer highly recommends that we watch this series, which follows the rise in popularity of personal computers in the 1980s and the invention of the internet in the 1990s. After checking out the trailer, it’s definitely on my to-do list to have a butchers at this interesting-looking series! In the meantime I’ll be doing some research into the contrast between hardware and software and getting the reading done ready for the next two lectures!

Lecture 6 – Mobile Phone Technology

Unfortunately, due to being ill, I missed the lecture this week. However, the reading and lecture slides were made available online, so I managed to look through those and get a basic sense of what I missed, although some of the slides are tricky to understand having missed the actual class.

So the lecture this week focused on mobile phone technology – how we use it, how it has evolved over recent years and decades, and what effect it has on us as individuals and on our society as a whole. It’s amazing to see how much technology has moved on over the last 20 years in relation to mobile phones, especially when we look at what the technology is focused on.

For example, in the early 21st century mobiles were focused on being small and efficient and on allowing users to be in constant touch with each other. Obviously, when we think about mobile phones now and the constant connectivity they offer through various mediums, this idea is laughable; back in the 2000s, however, it was THE thing and everybody was excited about it. Nowadays the focus is on the camera technology, how big the screen is, the sound quality, the storage capacity for downloading movies and music etc. Our ideas and needs evolve just as much as the technology itself.

The lecture also looked at how mobile phone technology affects the individual as well as society as a whole. Does it make us more connected and therefore more prone to communicate, network and stay in touch with friends and family, or does it distract us from real life and play on our need to be ‘doing’ something? This links to ideas around whether people use the mobile as a shield to avoid talking to others and socialising – by presenting an image that they’re already communicating with somebody else.

This blog post isn’t overly detailed, but I wanted to write a little about some of the things covered in the lecture – drawn from the information I received online and the set reading for the class – just to discuss some of my ideas and thoughts on the topic!

Lecture 4 – Power, Policy and Governance

In the lecture this week we began discussing issues surrounding copyright and the governance of the internet. Since we also needed to chat about some problems relating to the assignments on the module and watch the 20-minute video I’ll be talking about in this post, we didn’t actually cover a huge amount of material in the lecture itself. However, we did have a two-hour seminar right after, where we could further explore case studies and arguments relating to how the internet is governed, what constitutes copyright and so on. You’ll find my thoughts on the seminar discussions here.

So, our main focus for the lecture was a particular speech made by Lawrence Lessig, in which he considers the absence of common sense in copyright law and calls for the revival of our creative culture. Lessig has long criticised strict copyright laws as a restriction on creativity, one which affects and will continue to affect the younger generations especially.

In his amazing speech he passionately and creatively gets across his ideas about read-write culture and copyright. Lessig notes that the law does not consider digital information and technology as ‘free’, but rather works on the idea that every use of culture is ‘copying’. For Lessig, the only option is to fight this apparent lack of common sense with common sense.

John Philip Sousa

Near the beginning of his speech, Lessig talks about a chap named Sousa, an American composer. In a letter to Congress in 1906, Sousa argued that the latest technology to emerge in the music industry – a device that allowed music to be captured and played back as a recording – was going to “ruin the artistic development of music in this country.” He believed that recording technology would spell the end of the vocal cord. Sousa romanticised his childhood, when children would spend time singing songs on their front porches as a means of keeping themselves entertained.

Lessig warmly refers to Sousa’s protests against the new “talking machines” in relation to his own views on the power of user-generated content. The new technology displaced creativity in the 20th century: people would rather listen than create. Lessig believes that the internet can revive read-write culture and bring the focus back to amateur, user-generated content – in a similar way to how Sousa romanticised the passion and creativity of his youth.

So what do we need to do?

As well as discussing his disillusionment with the world of copyright and freedom of information legislation, Lessig outlined a number of steps he believes we as a society should take in order to protect our creativity and passion.

  1. Campaign to the courts – policy makers need to be made to understand the importance of freely available content online and to change the relevant legislation
  2. Think of the economy – those in charge need to realise the potential boost to the economy that could come from the increase in creativity if legislation is adapted
  3. Content availability – content creators need to make their work more available so that other creators can access and adapt it
  4. Get businesses involved – businesses need to express interest in read-write technology; the more free content there is, the more it can compete with content that is not free
  5. New technology – new advances and technologies need to be open for business


As well as outlining steps that society can take to change and influence policy on copyright and freedom of information, Lessig concludes by reiterating that user-generated read-write content is NOT piracy, and that we need to pass this message on to our children and create a technological system in which they have the opportunity to be creative, innovative and do something different, using whatever content they can get their hands on, regardless of where it is stored or who created it to begin with.

He ended by emphasising that our children are different. You cannot stop children from using new technology creatively; that’s what it’s there for, and creative thinking is what being a child is all about. You cannot stop children being creative with new technologies, you can only criminalise it. And if this is what the powers that be are focused on doing, we need to be focused on stopping them, because at the end of the day, who will lose out? The children. By creating legislation that thwarts creativity, our government is telling children that they live against the law by being creative with technology, and this is what we need to stop.

The ideas conveyed by Lessig in this TED speech are ground-breaking, and they cannot be ignored. By you. By me. By those in influential and powerful positions. By businesses. And especially not by government. It’s up to us to put the message across and protect our creativity, innovative technologies and OUR internet.

Lecture 3 – Ecology

For the week 3 lecture of Media Technologies, it was time to think like a scientist. I don’t mean we got out Bunsen burners and started setting fire to things in the media hub (unfortunately); rather, we needed to think about terms such as evolution, natural selection and environments. Of course, it’s not strictly necessary for those theorising media technologies and their connections with society, education etc. to consider media in this way. However, Marshall McLuhan found the comparison both intriguing and useful and, I must admit, so do I!

McLuhan revolutionised how we see media and how they affect society on a global scale, there’s no denying this. However, he took it one step further. He believed that no medium worked independently, but rather that they worked together in a system of “constant interplay”. This is where the science comes in! McLuhan likened this media system to a complex ecosystem of interactions – much in the same way that the world around us is a delicate ecological balance of animals, plants, bacteria etc. But what made McLuhan notice and believe in the similarities between a natural ecosystem and a technological one?

As we have heard many times from Professor Xavier of the X-Men franchise, new creations in nature are born from genetic mutations. Why do we have blue eyes or red hair? Because of a mutation within our DNA. So how do we explain the advances made with the release of a new smartphone or console? No advance or technology appears from nowhere; it's the result of hundreds of leaps forward and years of innovation and creative thinking. From the days when the printing press revolutionised the world of books and mass media really took off, to where we are now – a huge number of books stored in the cyberspace of the world wide web, ready to download as PDFs at the touch of a button.

However, evolution cannot be thought of as a linear process, but rather as a gradual transformation of processes that inter-connect with each other – with specific advances in one technology affecting what can be done with another, and so on. McLuhan states that "no medium has its meaning or existence alone"; he believed different mediums and media technologies are like species living within the same ecosystem, establishing relationships between them. Each species learns from the others, competes with the others and is constantly adapting to survive and thrive within a competitive ecosystem.

As well as mutations, natural selection also plays a part in the eco-system of media technologies. I'm assuming you have a Blackberry device? You don't? Well then, maybe an Android device? Or an iPhone? There was a time when the Blackberry was the ultimate PDA – personal digital assistant. The Blackberry was incredibly popular with professionals and businesses as a way to keep track of appointments, deadlines, meetings etc. However, it has since become a relic, as iPhone and Android products have caught up with and overtaken the Blackberry in terms of innovation and sales. Here is a perfect example of how natural selection allows the dominant species – or in this case, product – to come out on top.

One of the most important concepts created and championed by McLuhan is that "the medium is the message". Meaning, it's not the way we use the technology that's the important part; it's the medium itself, how we interact with it and how this influences our lives and society. McLuhan also added that the content of a medium is itself a medium, so it's not just the book that should be considered a medium – the words and pages within form mediums also.

As you can see from the video above, which shows an interview with McLuhan from 1967, he doesn't take himself and his work too seriously. For this reason and several others, McLuhan and his theories have been heavily criticised. As he often speaks in metaphors and similes – likening one concept to another, relating scientific evolution to how we advance our digital technologies – many have been confused by his (at times) ambiguous ideas. Most people think in very linear terms: they research, develop and theorise in order to develop their learning, and thereby put 2 and 2 together to make 4. McLuhan was more concerned with creating new ideas and challenging his learning by questioning himself at every opportunity and constantly developing and adapting his ideas.

I found the ideas put forward by McLuhan incredibly far-reaching and interesting; I'll be discussing them more in the Seminar 3 – Ecology post!

Lecture 1 – Invisible Technologies

One of the mandatory modules for Semester 1 of my 3rd year, studying Media here at Sheffield Hallam University, is Media Technologies. The module focuses on the technologies – old and new – that we use in relation to the media, in the past, the present day and the future. This includes not only the obvious technologies such as mobile phones, the internet, laptops, cameras etc., but also things like radio, analogue television, the written word, virtual reality, the Raspberry Pi and more.

The first lecture acted as an introduction to the topic, getting us to start thinking about the technologies we use, and why and how we use them. My media technologies lecturer is very interested in the idea of 'invisible technologies'. What technology do we use every day and take for granted? What technology do we not see at all and yet depend on?

We began by discussing the idea that we ought not to be 'monkeys with weapons' – meaning, we must understand the technology we use: what it does, how and why. This has never been more important than now. Technology has become more and more powerful; the mobile phones we carry in our pockets are highly capable machines, and the social networking sites we use so much can be put to many purposes. We should understand the power of technologies such as these if we are to use them properly and safely. By understanding the technology available to us, we can control it, rather than letting the technology control us. For example, we may use our mobile phones for many things; however, are they safe? Can we use this technology and still maintain our privacy?

We then moved on to discuss how advances within the media technologies industry can disenthrall users. We are constantly improving and developing the technology we use; however, does this make us numb to possibilities? For example, over the last 5 years, what advances have really been made in the technology used in the iPhone? And has this made any difference to the number of people who have purchased the latest iPhone within 3 months of its release? Are we blind to the possibilities of the technology around us because of what we already have? For example, are we so impressed with what the current iPhone can do that we are not interested in what it could be capable of if it were further developed?

Our thoughts then turned to some of the first media technologies humans used and relied on. Although many do not consider it to be one, writing is in fact one of the most widespread media technologies ever to have existed. As it is so often overlooked as a technology, it can be referred to as a common 'invisible technology'. Before the written word was created, people could only communicate using voice, and relied on societies and communities – news and information could only be conveyed directly from person to person. Writing is also a definitive media technology in that it requires a physical tool – for example, a pen and paper. And it remains one of the most common media technologies: whether we use a laptop, a mobile phone or good old-fashioned pen and paper, writing is still one of the first things we learn as children.

These are the main points I picked out from this first media technologies lecture. Although we also looked at a number of Greek philosophers and writers, as well as various language traits and writing patterns, I feel these are the most important pieces of information to take away from this particular class. I'm certainly looking forward to finding out more about our assignments and the topic in general in the coming weeks!