Dutch Digital Design
Sharing the best interactive work from the Netherlands


• Bézier: Sleek and futuristic e-shopping experience for fonts-of-our-time foundry
• Zentry: From brand system to informative content and beautiful digital design
• Radio Radio: Aesthetically captivating, smoothly built. A clean digital club experience
• Ask Phill & Analogue Agency: Creating digital presence with bold, no code immersiveness
• The School for Moral Ambition: Sculpting a movement for morally ambitious firestarters
• Nationale Bioscoopbon: From physical card to a sustainable, immersive digital experience
• Studio D outstanding online presence: Next level immersiveness to create digital stand out within urban design
• ark8.net: A stylish digital amalgamation of fashion, gaming & anime culture
• Sculpting Harmony: Digitally sailing through Gehry's Walt Disney Concert Hall
• Normal Phenomena of Life: Putting biotechnology mixed with lifestyle and fashion at the forefront

Dutch Digital Design.
Stories. News. Events.


June Park: driven to create user experiences with societal impact
Dutch Digital Design curator: June Park from Fabrique (Interview)

Introducing Morrow: change for good, hear the youth
Partner in the Spotlight: Morrow (Interview)

Kamiel Meijers from 51North. Making the digital journey tangible.
Kamiel Meijers - Dutch Digital Design curator (Interview)

Meet Merlin. What makes their work magical. Imagine. Code. Magic
Partner in the spotlight: Merlin Studio (Interview)

Who's in charge of making AI more socially responsible?
AI and social responsibility. What our partners say. (Thought Leadership)

Your Majesty: about branding and uniting the curious
Partner in the Spotlight: Your Majesty (Interview)

The impact of AI within the creative industry. What our partners say
The impact of AI within the creative industry - part I (Thought Leadership)

Margot Gabel: passionate about connecting digital design with emotions
Margot Gabel Build in Amsterdam & Dutch Digital Design Curator (Interview)

Christian Mezöfi from Dentsu Creative: loves detail and 3D design
Christian Mezöfi Dentsu Creative & Dutch Digital Design curator (Interview)

Welcome ACE, Cut the Code, DotControl, Lava and Merlin Studio
welcome to five new partners (News)

Who's in charge of making AI more socially responsible?

Thought Leadership

AI and social responsibility. What our partners say.

Artificial Intelligence (AI). Everyone is talking about it, from governments and big tech to people like you and me, all over the world. But how and why do we use it? Does AI make the world a better place: a world that runs more efficiently because AI saves us time, in our work and perhaps in the way we live our lives? And do we really know what it does and how it works?

We sat around the (online) table with some of our Dutch Digital Design partners. In this second round we were joined by Niels de Keizer from FONK, Mees Rutten from Merlin, Geert Eichhorn from Media.Monks and Kasper Kuijpers from Your Majesty, together with Sandjai Bhulai, professor of business analytics, and Jesper Slik, lead of the VU-Pon AI Lab*. Two statements, lots of opinions.

Statement I: AI makes us work more efficiently.
Statement II: AI can make the world a better place. Who is responsible?

Together we talked about what these statements mean to us, about the challenges AI throws at us, and about how we can address them. Are we, as creators, also responsible for what AI creates and how it creates it? Are we, together with AI, able to make a positive impact?

*VU-Pon AI Lab: Pon and VU (Vrije Universiteit Amsterdam) have embarked on a mission to address societal challenges through the power of artificial intelligence: fostering collaboration between industry, academia and other partners, unlocking innovative solutions that make a lasting impact, and leveraging these technologies for Pon.

How creators think about AI

Sharing these two statements with the group initiated an interesting and insightful conversation. Who is in charge of ensuring that AI is used in a responsible way? Do we close our eyes and hold the big tech companies responsible? And what is our main focus concerning AI at the moment? Are we just using AI to make our way of working more efficient? Our round-table participants pondered these topics, and threw some more food for thought on the table.

Niels (FONK) starts off by saying that he believes everyone should be held responsible, and that we definitely need to think about ethics, because AI is here to stay whether we like it or not. We therefore need to invest in the future by discovering, reading, learning and doing as much as we can about AI.

Mees (Merlin) adds that we need to improve this invention and take away the fear, by making people more aware of its opportunities as well as its pitfalls, and by making it more transparent. After all, AI is made to make work and life easier and our way of working more efficient, and in the long run it might make our jobs more interesting, or even create more interesting jobs. Kasper (Your Majesty), however, is a little more concerned. He already sees jobs shifting, with a focus on earning money: a representation of today's world. AI systems are also programmed that way, based on existing data.

On the note of creating more transparency, Geert (Media.Monks), an innovation director in charge of research and development in this area, shares his concerns about AI on a business level. He believes that in five years' time the workforce in marketing will be reduced by 40%: perhaps more scalable and efficient, but also offering a different level of quality, with bulk advertising on one side and crafted, handmade content on the other. He believes that we should not compete with AI tools but embrace them, in order to fully understand and work with them. He also points out that most AI systems are not yet profitable, and that the big tech companies spend billions of dollars on AI investment. This is not yet feasible for smaller companies and agencies.

Considering all of the above, all members of our round table feel that creators should be more socially responsible for the output AI creates, and for how it creates it, and that we should not just focus on efficiency and optimisation. However, ethical issues are tricky to manage, and who is responsible for them: governments, or all of us? Currently, those who have the most money build and control AI systems, such as Meta with its Llama models, and Nvidia. They still determine the status quo. But perhaps we, in the creative industry, can manage the output with the knowledge we have of AI.

Geert truly believes that once the public is tired of the same generated images, there will be a demand for handcrafted materials again. Sandjai (Pon AI Lab) is not so sure, and wonders whether a new generation will simply get used to this quality, so that it becomes the new norm. Because, as mentioned before, values in today's world are shifting. People who do not really belong in the creative sector will see a chance to create purely to earn money; they see a gap in the market. Kasper doesn't think this is all bad, as it also makes the creative industry more accessible. For example, people who cannot draw but have an eye for design and a good amount of imagination can now also be part of our industry. This would be a new type of creative direction, and not necessarily a bad move, according to Kasper: breaking down existing boundaries.

Does this come with a risk? After all, this new structure is still unknown, and who is going to monitor it? Niels fears that consumers will drown in even more digital noise, and that it will become increasingly hard for brands to distinguish themselves.

AI by Geert Eichhorn - Media.Monks

Is AI smarter than humans?

As Geert points out, these large image models are not owned or created by Media.Monks. However, they are able to fine-tune the input, and then observe and monitor the output closely.

Niels agrees. They are also trying to fine-tune the input as they do not have the time, money or data to develop their own AI models. They research which prompts and context they need in order to create the right output. This involves a lot of exploration, and also the need to attract new talent: prompt engineers and/or linguists.

Large language models are trained on the data that is put in, and that data is a biased impression of today's society. Unfortunately, this often means mostly patriarchal output and an abundance of white males. Creators of AI models are aware of this, and so is an increasing share of the world's population, which pushes these manufacturers to take responsibility for changing this bias, for example by including a prompting tool within their model. It also calls for guidelines that indicate whether something is right or wrong, and what is allowed or not. Mees gives another example where guidance is desperately needed: body-tracking apps do not recognise all different types of bodies, and therefore produce biased results for their users.

Everyone around the (online) table agrees that it is our joint responsibility to raise awareness around the subject of AI: to avoid bias and create more transparency around this technology, so that everyone using AI can make better choices. Kasper adds that agencies should also do this with their clients, from onboarding onwards: thinking, exploring and making it happen together. He also suggests that within an open-source environment, techies all over the world could discuss how AI works and how to make it more socially responsible, together with AI engineers, in order to obtain different, non-biased views.

AI by FONK

Fine-tuning data for less bias

Jesper (Pon AI Lab) reiterates that AI only sees data and is not itself biased; the data is biased. You can fine-tune the data to change the output. Therefore, as mentioned before, the responsibility starts with the makers of AI tools. But they have no guidelines to work with, and their clients tend to want output according to their own wishes. This is dangerous, biased territory, and rules need to be introduced: government policies and guidelines are needed to determine and introduce standards. However, everyone has a role to play when putting together these policies: government, makers and clients.

Sandjai adds that this responsibility is layered:

  • Makers need to avoid bias and, in this way, exclude discrimination
  • Users need to know how to use tools properly
  • Government needs to regulate well

The European AI Act was passed in March 2024. It takes a very broad approach and classifies AI systems by risk, from high to low impact. By mid-2026 all regulations should be in place.

A short AI tutorial by Sandjai: an AI model is driven by a large amount of data and is therefore mostly generic; it needs to learn everything and it represents the norm. That only changes when it is trained for a specific purpose, with specific data. You will need specialists to do this, in order to make the output more specific.
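To make that tutorial a little more concrete: the sketch below is our own minimal illustration in PyTorch (invented model, data and numbers; not the panel's or Pon's actual setup) of a model that is first trained on broad, generic data and only becomes specific after additional training on specific data.

```python
# Minimal, self-contained sketch. All data and numbers are invented for illustration.
import torch
from torch import nn

torch.manual_seed(0)

# Broad, "generic" data: the model learns the overall norm from it.
x_generic = torch.randn(1000, 8)
y_generic = (x_generic.sum(dim=1, keepdim=True) > 0).float()

# Small, domain-specific data that follows a different rule.
x_specific = torch.randn(100, 8)
y_specific = (x_specific[:, :1] > 0.5).float()

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(x, y, params, epochs, lr):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) Pre-train on the broad data: the result is a generic model that represents the norm.
train(x_generic, y_generic, model.parameters(), epochs=200, lr=1e-2)

# 2) Fine-tune only the last layer on the specific data, so the generic features are
#    kept but the output becomes domain-specific.
train(x_specific, y_specific, model[2].parameters(), epochs=200, lr=1e-2)

with torch.no_grad():
    acc = ((model(x_specific) > 0) == y_specific.bool()).float().mean()
print(f"accuracy on the specific task after fine-tuning: {acc.item():.2f}")
```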

Jesper illustrates this with an example of AI in supply chains, where automated systems can also be biased in relation to demand and pricing. This is very difficult to spot in an automated AI system, and needs to be monitored closely in order to prevent it from happening. An example of this type of bias: white Barbie dolls are in higher demand, so their price automatically goes down, while a Barbie doll of colour is in lower demand, so the automated supply chain system automatically increases its price.
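The mechanism Jesper describes is easy to reproduce. The sketch below is purely illustrative, with a made-up pricing rule, made-up demand figures and hypothetical product names rather than any real retailer's system: the rule itself looks neutral, but fed with skewed historical demand it systematically makes the lower-demand product more expensive.

```python
# Illustrative only: a naive volume-based pricing rule applied to skewed demand data.
# Product names, costs and demand figures are invented for this example.

def unit_price(base_cost: float, forecast_demand: int) -> float:
    """Economy-of-scale rule: 1% discount per 1,000 units forecast, capped at 30%,
    then a fixed 40% margin on top."""
    discount = min(forecast_demand / 1000 * 0.01, 0.30)
    return round(base_cost * (1 - discount) * 1.40, 2)

forecast = {"doll_variant_a": 50_000,  # historically high demand
            "doll_variant_b": 4_000}   # historically low demand

for sku, demand in forecast.items():
    print(sku, unit_price(base_cost=10.0, forecast_demand=demand))

# doll_variant_a -> 9.80  (high demand, large discount, lower price)
# doll_variant_b -> 13.44 (low demand, small discount, higher price)
# The bias comes from the historical demand data, not from any explicit intent.
```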

Conclusion: as long as we all become more and more aware of how AI works, we should be able to prevent bias, or at least detect it. Therefore, the responsibility lies with everyone.

AI still life by Kasper Kuijpers - Your Majesty

Raising AI awareness

Throughout this conversation it is becoming clear that not only can we, as creators, be in charge of making the output of AI more socially responsible, but that we should also raise more awareness among its users, especially those who do not know all the ins and outs of digital technology. Education, for clients and for the general public, will be an important part of this.

An interesting fact: AI tools are mostly trained in the United States, a country with a culture and values that differ from those of Europe and the rest of the world. The output might therefore not be suitable for every country. Another example: The New York Times sued OpenAI over the use of its articles to train AI models, a copyright issue, whereas in Japan copyright law places hardly any restrictions on the material that may be used to train an AI tool. All of the above is important to consider when working with AI. Different strokes for different folks.

Geert and most of our panel believe that governments are moving too slowly: regulations are soon outdated, because technology moves (too) fast. Therefore, we as an industry can take responsibility for monitoring and creating the right input. This is not without risk, because doing it ourselves without the right knowledge also means a high probability of generating bias.

So, technology moves too fast for governments, and definitely too fast for consumers to understand, according to Mees. He points out that despite this, consumers are using AI, and wonders if this, despite the pitfalls, will eventually just become the norm and we will all get used to it. Which perhaps makes it a job for the creative industry and their clients to raise awareness right now about these new technologies, including their advantages and disadvantages, to create transparency and to educate the public.

Kasper suggests a possible partnership between the Dutch Digital Design foundation, a close-knit community of creative digital agencies who know a thing or two about technology, and SIRE, an independent Dutch foundation that raises awareness of social issues among the Dutch public. Sandjai responds that there already is a Dutch AI Coalition (Nederlandse AI Coalitie) that offers AI courses. However, the participants are mostly tech-savvy, and not the people who need the education, which probably means it is not offered in a way that appeals to an ordinary audience. How do we reach those people? This is definitely something to think about.

Niels is thinking about a shift in jobs within agencies: looking for people like prompt engineers and other experts in AI technology, who can drive models properly and are good with language and technology, so that the right questions (prompts) are asked. Linguists and copywriters come to mind here. He also mentions collaborations with higher education institutions, universities and scientists. Geert adds that to get the most out of AI models we need to explore all the technical options available at our end, like creating a dataset from scratch and training on it, as well as crafting correct prompts, to make AI tools more context-aware and user-friendly.

Sandjai is of the opinion that the actual AI models are not going to be adjusted much anymore. Therefore, he believes there needs to be more focus on the workflow around the model: how we can fine-tune the data to enrich the model, and how we can develop better and richer prompts.

To achieve any of this, collaboration between different fields of expertise is important - from AI engineers and scientists to linguists, prompt engineers and agencies.

AI by Merlin

The all-important question: can AI make the world a better place?

We can conclude that there is not one definitive answer. Niels, Mees, Geert, Kasper, Sandjai and Jesper all see the opportunities AI brings. When used correctly, a better world might mean a more efficient world and, ultimately, a less biased world. However, the stakes are high.

Not everything that is possible needs to be done, are the wise words from Sandjai. He gives us a few examples of the positive impact AI can bring. Through AI optimisation of the supply chain by certain airlines, wastage of flight meals is reduced; this process can be used in other industries to reduce waste, and in this way AI can help the world become more sustainable. A definite positive impact. Or take Ethiopia, where AI helps optimise the fertility of the land: AI-powered soil testing amplifies farmer productivity and climate resilience. There are many use cases where AI shows its positive impact. It is down to us to use it responsibly.

Despite the overall positivity, Geert also wants to highlight the pitfalls, so we stay aware. Technology is becoming more and more accessible (though not necessarily more user-friendly or transparent), and this also makes it more accessible to 'bad actors'. Mees believes that AI holds a magnifying glass over communication: there is a lot more disinformation, but we also receive good information faster.

Overall, we are not really prepared for what is coming, although Geert feels that the Netherlands is ahead in understanding and working with AI, and aware of the impact it has. Therefore, we could play an important role, and take our responsibility to prepare and plan for this transformation. There needs to be a strategy for how to move forward.

Jesper concludes that we need to seize the opportunities, but that it is up to us humans what we do with them. He states that humans are super-biased. So perhaps we can turn this around, and use technology to create less bias, and with that a better world. That is definitely one to think about!

Yet another insightful discussion, moderated and written up by our lead editor Nicole Pickett-Groen, creating some momentum in thinking about AI: what it means to the world and to the creative industry, and how we can contribute to using AI in a responsible and transparent way. It was generally felt that we, as a digital creative industry, definitely have a role to play in making AI more transparent by educating our clients and the general public. A discussion that we should definitely continue: here in the Dutch Digital Design community, but also with our clients, our fellow designers, and partners in other industries.