Issue 42 (137) 2020

Digitalisation

The ethical implications of the new technology

By Erling Hesselberg, vice president, Crayon

As applications of artificial intelligence (AI) have proliferated throughout society, so have debates around the ethics of data and communication.

There is much hype around the Internet of Things (IoT) at its interface with AI and machine learning. The applications seem limitless as physical things start to communicate with each other wirelessly, bringing us fast, contextual services. Take the Amazon Echo, which can answer our questions and operate other devices in the home. Its capabilities have continued to grow, and it is becoming an integral part of everyday life. According to Business Insider, more than 75 billion objects will be connected to the IoT by the end of this year. A poll by opensource.com four years ago showed that most respondents didn’t own any IoT devices; now they own an average of at least three each.

This large-scale expansion of technology has raised several ethical issues. IoT and advanced data collection make it easier to track and predict user behaviour, with AI simplifying the task of anticipating what end-users require. With growth and technological advances, there are far more places to drive usage across many devices. The merging of IoT and AI in this way creates fear among consumers about the privacy and security of their personal data, as well as about the accuracy of the predictions.

Ethical Issues with IoT
The main dilemma facing companies is that, whilst consumers use their IoT products, there is a lack of knowledge and understanding as to how they work. The general consensus is that IoT security is worryingly underdeveloped and incredibly prone to cyber-attacks. A Forbes report said that last year saw a 300% increase in cyber-attacks via IoT devices. This is being put down to the exponentially growing number of devices, coupled with a focus on innovation rather than security. IoT devices released to the market have a ‘fire and forget’ feel: companies launch them but fail to put relevant updates in place to ensure they remain safe.

The Mirai botnet attack of 2016 trawled the internet for unsecured IoT devices such as cameras and routers that still had factory-default passwords, and took them over. The malware installed on these devices then flooded major US websites with requests in an attempt to overwhelm them and take them down. Insecure devices put others at risk, and that risk grows when the people using them lack the knowledge to operate them safely. And who is liable for the damage done by such breaches?
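
To show how little sophistication such an attack needs, here is a minimal defensive sketch in Python that checks a device inventory for factory-default credentials. The inventory records and credential pairs are hypothetical examples, not Mirai's actual dictionary (which contained a few dozen such entries).

    # A minimal sketch of a default-credential audit on a hypothetical device
    # inventory. The credential pairs below are illustrative examples only.
    KNOWN_DEFAULTS = {
        ("admin", "admin"),
        ("root", "12345"),
        ("admin", "password"),
    }

    devices = [  # hypothetical inventory records
        {"ip": "192.0.2.10", "type": "camera", "user": "admin", "password": "admin"},
        {"ip": "192.0.2.11", "type": "router", "user": "admin", "password": "Xk9!vR2#q"},
    ]

    def flag_default_credentials(inventory):
        """Return the devices still running with factory-default credentials."""
        return [d for d in inventory if (d["user"], d["password"]) in KNOWN_DEFAULTS]

    for device in flag_default_credentials(devices):
        print(f"{device['type']} at {device['ip']} still uses a factory-default password")
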
A further example came when the fitness app Strava was found to be revealing sensitive military information: it was possible to see the jogging routes used by soldiers around military bases. A network of insecure devices left people at risk of physical harm, especially if the data fell into the wrong hands.

A big reason why security is not a prevailing feature of IoT is cost. As security features get more complex, devices cost more to make; this doesn’t work for businesses in a competitive, and in most cases start-up, environment. There is an ethical call here for governments to go beyond the existing minimal regulatory requirements. The challenge lies in writing regulations that do not stifle innovation, striking the right balance between innovation and security.

Privacy
IoT devices collect vast amounts of user data. The objective is to provide users with targeted services through marketing, creating both a business driver and a consumer benefit. A case from the US retailer Target shows how this can quickly go wrong. After mining a customer’s purchasing habits, Target started to send her pregnancy-related communications. It turned out that the customer was in fact a teenager who was trying to keep her pregnancy private.

The ethical issues coming out of this were two-fold. Firstly, to get the data on her behaviour, Target must have been using IoT devices to monitor credit-card activity and see the products she was buying. Whilst the law in most countries now insists that consumers have the opportunity to opt out of such data collection, this isn’t instantly apparent to the everyday shopper. Moreover, with most loyalty schemes you are automatically opting in to having your data collated in return for what you consider to be benefits, and advanced IoT technology then allows personalised offers to be sent to you based on your purchasing history. Secondly, Target took it upon itself to send personalised information about a major life event that the customer had not told the firm about; it should not be the firm’s right to use that as a targeting method. The fact that the case involved a teenager makes it worse: young, vulnerable people being targeted by large organisations without their consent raises major privacy concerns, as well as concerns about the impact on mental health.
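
As a purely illustrative sketch of the consent point, the Python snippet below only derives offers from purchase history when a customer has explicitly opted in; the Customer record and its fields are assumptions made for the example, not any retailer's actual system.

    # A minimal sketch of consent-gated personalisation. The Customer record
    # and its fields are hypothetical; the point is the explicit opt-in check.
    from dataclasses import dataclass, field

    @dataclass
    class Customer:
        name: str
        purchase_history: list = field(default_factory=list)
        marketing_opt_in: bool = False  # must reflect an explicit, informed choice

    def personalised_offers(customer: Customer) -> list:
        """Derive offers from purchase history only if the customer opted in."""
        if not customer.marketing_opt_in:
            return []  # no consent, no profiling-based offers
        return [f"Offer related to {item}" for item in customer.purchase_history]

    shopper = Customer("example shopper", purchase_history=["prenatal vitamins"])
    print(personalised_offers(shopper))  # [] until marketing_opt_in is set to True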

IoT can easily cross ethical lines. If the Amazon Echo knows when we say “Alexa”, surely it knows when we say other things. Even seemingly innocuous information, like when we make a coffee, can be personal if it leaks: a hacker who knows we make coffee at the same time every day can start tracking us the moment that pattern changes. Unintentionally, IoT can build profiles that impact privacy, and this puts the technology in an ethical dilemma.
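
To make that concrete, here is a small Python sketch of how a daily routine can be profiled from nothing more than event timestamps; the timestamps and the one-hour threshold are invented for illustration.

    # A minimal sketch of how innocuous event logs become a behavioural profile.
    # The timestamps and the one-hour threshold are illustrative assumptions.
    from datetime import datetime
    from statistics import mean

    coffee_events = [  # hypothetical smart-kettle activation times
        "2020-03-02 07:05", "2020-03-03 07:10",
        "2020-03-04 06:58", "2020-03-05 09:40",
    ]

    minutes_of_day = [
        dt.hour * 60 + dt.minute
        for dt in (datetime.strptime(t, "%Y-%m-%d %H:%M") for t in coffee_events)
    ]
    usual = mean(minutes_of_day[:-1])            # the routine learned from earlier days
    deviation = abs(minutes_of_day[-1] - usual)  # how far today strays from it

    if deviation > 60:  # more than an hour off the usual time
        print("Routine broken: the household is probably not following its normal day")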

The Tech Giants

Amnesty International has claimed that the data collection carried out by tech giants such as Google and Facebook to serve us targeted advertising is an invasion of human rights. It says that you are almost forced to give data to these companies because they have become so synonymous with everything we do; every time we talk to the search engine, it helps them know more and more about us. The Amnesty International report makes an extensive and strong case, arguing that we need to challenge the idea that such intrusive data collection is necessary.

The Future of New Technology and Ethics

In November 2019, Australia released a code of ethics to govern the use of IoT devices. The code consists of 13 principles which include:

  • No duplication of weak passwords, meaning every device must ship with a unique, non-default password (see the sketch after this list)

  • Implementation of a vulnerability disclosure

  • Service developers and app providers must have a public point of contact

  • Software and firmware must be kept updated
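
As a minimal sketch of that first principle (the 16-character key length and the serial numbers below are assumptions, not part of the Australian code), a manufacturer could generate a distinct random password for every device at provisioning time:

    # A minimal sketch of the "no duplicated weak passwords" principle:
    # generate a unique, random credential per device at provisioning time.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits

    def provision_password(length: int = 16) -> str:
        """Return a cryptographically random, per-device password."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    for serial in ["SN-0001", "SN-0002", "SN-0003"]:  # hypothetical serial numbers
        print(serial, provision_password())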

These principles align with and build upon guidance from other countries, such as the UK, which has also been firmly laying down the law. The EU has developed a code of ethics as well, and California has passed an IoT security law that came into force in January 2020.

There is a recognition that IoT will be the driving force behind innovation, but we must focus on security and privacy now. This is the only way to gain enough trust from consumers and investors.

Crayon, a partner for your IoT & AI journey
At Crayon, AI is part of our core business. We offer advice and guidance from world-class experts who have been part of AI projects across multiple industries and have hands-on experience in implementing AI technologies to address real-life business problems. Crayon has invested significantly in taking its AI practice from delivering proofs-of-concept to developing and implementing real-life applications, and we are proud to have been recognised by Microsoft as its global AI and Machine Learning Partner of the Year. Crayon continues to expand its influence in emerging technologies by establishing several AI centre-of-excellence hubs in various geographical locations.

More in Digitalisation:

How digitalisation will boost the customer experience

By Rafał Górski, Automation & Rapid Solutions Lead, and Konrad Gaponiuk, Senior Consultant, Business Advisory KPMG in Poland.

 

In a highly competitive market, companies are trying to understand why customers prefer certain brands, stay loyal to them and recommend them – especially when the products or services of different brands are comparable.

Silent cyber

By Willis Towers Watson Polska

 

The concept of ‘silent cyber’ presents a number of problems for the insurance market, but arguably the most significant one is that of risk accumulation. Risk accumulation for cyber as a line of business is already an issue for insurers and reinsurers. However, it is potentially dwarfed by that of cyber as a peril across multiple lines.

Glocalisation – a niche for growth

by Guy Leclercq, CEO of Deveho Consulting Group

 

Deveho Consulting Group is a Sage certified partner, integrating Sage’s X3 enterprise resource planning (ERP) platform. Founded in France in 2009, the firm has grown into a business that distributes the Sage solution in the cloud. Its particular specialisation is in cross-jurisdiction implementations.

eCommerce and ERP – made for one another

A new generation of consumers is entering the market – the ‘hypermedia generation’ for whom eCommerce is a native purchasing environment. They like to have a choice, and that goes for the transaction model as well.