From Screens to Systems: Why Quality Content Still Matters in the Age of AI

Introduction: From Screens to Systems

When I was a child, there was a single block of children’s programming on our national TV, which meant “limited screen time” by default. Years later, I became a children’s media and quality content expert; I spent years researching the impact of screen time and what makes content truly enjoyable and beneficial, and I created content accordingly. People often asked me, “How much screen time should kids have?” I would often cite institutions like the American Academy of Pediatrics (AAP), which for years recommended no screen time for children under two and advised that “for children aged 2 to 5 years, screen use should be limited to one hour per day of high-quality programming, viewed together with a caregiver” (American Academy of Pediatrics, 2016).

But we also need to acknowledge where we are now and the conditions that are evolving. “Today’s children spend more time with media than they do at school.” (Valkenburg & Piotrowski, 2017, p. 1)

Today, children are often described as “digital natives,” as if they are savvy, connected, and in control. But the reality is more complex. While children may seem more autonomous in how they access and engage with technology, they are also more vulnerable than ever to the systems we build around them. They are growing up in a world where the boundaries between entertainment, education, and influence are increasingly blurred.

Children don’t grow up with technologies. They grow up with what those technologies deliver: stories, games, prompts, feedback, and also silence. For decades, content has been the invisible structure shaping how children learn, feel, and behave. Yet it hasn’t always received the recognition it deserves. Now, that structure is changing.

In the age of AI, children are no longer just watching or playing. They are interacting with systems that listen, prompt, and respond. They are engaging with technologies that make suggestions, offer reminders, and give choices. And in this shift, from screens to systems, from episodes to ecosystems, one thing remains constant: the importance of what we put inside these interactions. The content we curate.

This article isn’t about fearing AI or chasing the hype around it. It’s about why, as designers, educators, and technologists working with children, we must return to the quiet core of our practice: the content we shape, the tone we choose, and the intentions we embed.

What does “quality content” mean now, when it’s no longer just broadcast or tapped, but woven into an ongoing conversation between a child and a system? Why does it still matter?

What Is Quality Content?

The term “quality content” is often used, but not always defined. When I mention “quality content,” people sometimes assume I’m referring only to good animation, polished visuals, or well-written scripts. Of course, these are very important. But high production value alone does not make content “quality,” especially in the context of children’s media. The surface can be impressive while the core is hollow.

What we rarely see, despite how often the phrase is used, is a clear, shared definition of what quality content actually means when we are speaking about children. And that definition matters, because it shapes not only what we create, but also what we prioritize, fund, regulate, and ultimately trust.

So, what is it really? It is not just about being ‘educational’ or ‘entertaining’. It is about intention, design, and developmental impact. In children’s media, quality content means something that supports learning, growth, and wellbeing in a way that is age-appropriate, engaging, meaningful, and respectful to children.

For many years, my work has focused on what makes content truly “quality” from the perspective of child development and children’s media studies. I often refer to the frameworks developed by Dr. Dafna Lemish, Dr. Barbara Kolucki, Dr. Shalom M. Fisch, and many others, but I also rely heavily on practice: what children respond to and enjoy, what parents trust, and what sustains attention in a healthy way.

A few foundational features of quality content include:

  • Educational effectiveness: Grounded in clear, age-appropriate learning goals based on developmental research. Here, Dr. Shalom Fisch’s capacity model emphasizes integrating educational content within engaging stories to maximize retention and comprehension (Fisch, 2004). He also highlights the importance of educational material “being central to the plotline” for children to retain and apply what they learn (Fisch, 2017).
  • Emotional safety: Content should model empathy, kindness, and emotional regulation. Research has consistently shown that emotionally safe media—particularly when it models empathy, conflict resolution, and prosocial behavior—can significantly enhance children’s social-emotional competence. This, in turn, supports both cognitive development and overall mental health. For example, children exposed to prosocial programming have demonstrated better interpersonal skills, self-regulation, and emotional understanding (Fisch, 2004; Valkenburg & Piotrowski, 2017).
  • Relevance and representation: Children need to see themselves and others reflected authentically both in terms of individual self-expression and cultural identity. According to Patricia Hidalgo of the BBC, “Children learn from what they see and experience. It’s important to expose them to other cultures to be accepting of them, but children also need to see themselves and others reflected on screen” (Hidalgo, 2024, as cited in Culver, The Quality Question). Children’s content should uplift local languages, customs, and stories. When children see their culture represented with dignity, it builds identity and strengthens community bonds.
  • Developmental fit: Quality media respects children’s developmental stages in terms of pacing, structure, story complexity, and language. It meets them where they are and grows with them.
  • Positive engagement: Joy, curiosity, humor, and creativity are essential for sustained engagement. Playfulness, when intentional, is not a distraction; on the contrary, it’s a driver of learning.
  • Play-based learning: As emphasized in Harvard’s Pedagogy of Play, play allows children to lead their own learning, test ideas, and explore uncertainty. Play is not separate from learning; it is how young children best engage with ideas and people. Play is at the heart of quality content.

These elements are not new, but in an age of algorithmic systems and AI-generated interactions, they are more essential than ever. As Dr. Barbara Kolucki and Dr. Dafna Lemish emphasize in their UNICEF guide, communication with children is not decorative; it is ethical design infrastructure and must be treated as such (Kolucki & Lemish, 2011).

To expand on this, I often turn to the communication principles developed in the UNICEF guide “Communicating with Children”. These principles offer a practical and ethical foundation for creating media and communication that not only inform but also nurture:

  • Principle 1: Age-appropriate and child-friendly communication. Use language, visuals, music, and humor that are understandable and relatable to the child’s developmental stage.
  • Principle 2: Holistic design. See the child as a whole. Address their emotional, cognitive, and social development together. Include positive role models and create “safe spaces” in narrative worlds.
  • Principle 3: Positive and strengths-based. Focus on what children can do. Encourage growth, resilience, and agency by using affirming messaging and character arcs.
  • Principle 4: Inclusive and equitable. Reflect the dignity of all children. Avoid stereotypes, promote diversity, and offer representation that is authentic and empowering.

These principles are not just best practice; they are essential. Especially as AI becomes more embedded in children’s lives and these interactions multiply, how we design them matters deeply. AI systems are beginning to “speak” to children. They prompt them, guide them, and sometimes even elicit emotional reactions or challenge them.

That’s why communication principles, long considered a soft skill in content development, must now serve as hard infrastructure embedded inside these systems. Because when we design content that honors a child’s perspective, context, and voice, we’re not just making media; we’re creating a meaningful experience.

Quality content for children doesn’t look like formal education but that doesn’t make it less impactful. Through story, song, characters, and repetition, it quietly provides the scaffolding children need to grow by supporting their language, emotional expression, social understanding, and sense of self. What might seem simple on the surface is often doing deep, lasting work underneath.

When done well, this content becomes part of daily routines. It helps shape how children see themselves, how they relate to others, and how they understand the world around them. And its reach is remarkable: it is present in living rooms, classrooms, refugee camps, and devices in the hands of caregivers.

Around the world, we’ve seen powerful examples of this kind of media. Ahlan Simsim, the Arabic-language Sesame Workshop series, reaches millions of children across the Middle East and North Africa with a dual mission: to support social-emotional development and to help children cope with adversity and displacement. Its culturally adapted characters and trauma-sensitive episodes offer not just entertainment, but healing and routine in times of instability.

Similarly, Daniel Tiger’s Neighborhood, created by Angela Santomero and based on the legacy of Mister Rogers, brings social-emotional coaching to young viewers through calm, affirming songs and scripts. The structure of every episode models empathy, emotional vocabulary, and practical strategies. Many of us will remember the song: “When you feel so mad that you want to roar, take a deep breath and count to four.” Its global popularity proves that gentle, predictable, emotionally intelligent content resonates deeply with children and caregivers alike. Takalani Sesame in South Africa and Blue’s Clues in the United States are further examples of quality content reaching millions of children and helping build critical foundational skills. These programs prove that when content is carefully designed and/or localized, it can support cognitive, emotional, and social development on a global scale.

To understand how we apply these principles in practice, I want to share a bit about my own work. As the co-founder of Mako Kids, a children-first media company based in Istanbul, I’ve spent years developing and producing quality content for young audiences across diverse settings. My work often focuses on empowerment, inclusion, empathy, and supporting children wherever they are in their development. I have had the privilege of collaborating with UNICEF, the European Union, national ministries, and major national and international broadcasters on a range of children’s media and impact-driven projects.

At Mako Kids, our mission has always been to create engaging, research-backed media that reflects the developmental needs and cultural contexts of children. We believe in designing content that not only entertains but also empowers. This approach has shaped some of our most meaningful productions, especially those built through collaboration with international partners and grounded in rigorous child development frameworks.

Two recent examples from this work highlight how quality content can be implemented and scaled in ways that reach and impact real children:

  • Kozalak Preschool, created under the “Increasing Quality of and Access to Early Childhood Education Services” Project (with UNICEF, the EU, and the Republic of Türkiye), offers a vivid case of content designed not only for learning but for modeling sustainable behavior and emotional growth. The 39-episode series is rooted in sustainable development principles and inclusive character design, supporting school readiness, nature literacy, and the Sustainable Development Goals (SDGs).
  • Fafa and Friends, a preschool series we create for Baraem Channel, built around play-based learning, decision-making, and friendship, presents learning moments through child-led interactions. Each episode takes place on imaginative play islands, and the format centers children’s ideas and agency. The series was designed to support not only developmental milestones but also soft skills like empathy, cooperation, and resilience.

In both examples, we began with the core values of quality content and then expanded them into entire content ecosystems. This is where media moves beyond “watch-and-learn” into something much more: a participatory, ethical, and joyful space for children’s development.

In the next section, we’ll explore why these features matter more than ever in the age of AI, and how the definition of quality must evolve to meet the demands of content embedded in technology-driven interactions that are no longer just watched but experienced, guided, and sometimes even initiated by intelligent systems.

Why Quality Still Matters, Especially Now

As AI becomes increasingly embedded in children’s digital lives, the stakes for quality content have never been higher. We are no longer just creating stories that children watch. We are creating systems that respond, guide, and sometimes even prompt behavior. We are creating tools that directly speak with the child.

This shift from linear storytelling to responsive interaction brings new responsibility. When a child asks a question to a voice assistant, or when a learning app directs them toward a task, the line between ‘content’ and ‘relationship’ begins to fade. We are no longer just producing media; we are shaping experiences that feel personal, interactive, and at times even emotional, and that comes at a cost.

And that’s precisely why the core principles of quality content must not only endure; they must evolve and thrive, guided by ethical principles and a deep understanding of childhood. Content embedded in AI systems must know how to communicate clearly and respectfully, speaking in ways that are developmentally appropriate, emotionally safe, and culturally and individually aware. But just as importantly, it must understand its own boundaries.

AI systems should not mimic emotional intimacy, simulate friendship, or blur the lines between tool and companion. They are not caregivers, mentors, or peers. They are designed systems structured to support learning and growth, not to replace human connection.

Research by Stefania Druga shows that children’s interpretations of AI tools vary significantly by socioeconomic background. Children in higher SES communities often approach smart agents like Alexa or Cozmo with curiosity and experimentation, whereas children with less exposure or fewer resources often interpret these same systems as more authoritative, emotionally capable, or “human” (Druga et al., 2019, p. 1). In focus group observations, some children believed Cozmo had feelings simply because of its expressive animations. One child noted, “Cozmo has feelings because of animations that reflect each feeling” (Druga et al., 2019, p. 4). The boundary between technology and emotional intelligence can be easily misunderstood by children, especially when the design encourages anthropomorphism.

This brings us to a key developmental insight. As Lev Vygotsky theorized, learning happens most effectively in the Zone of Proximal Development: the space between what a child can do independently and what they can do with guidance. Ethical AI tools should aim to scaffold learning and agency within that zone, not bypass it (Vygotsky, 1978, as cited in Valkenburg & Piotrowski, 2017, p. 183).

This is echoed in the UNESCO AI Competency Framework for Students, which advocates for AI tools that empower learners to be responsible, creative, and critical thinkers, not passive consumers. It stresses the need for AI to support students in becoming “co-creators of AI tools, not only passive prompters” (Miao, 2024, pp. 7–15).

In my interviews for the KidsAI Journal, I’ve spoken with leading voices in the field. Dr. Sonia Tiwari emphasized the value of co-design, where AI serves as an inspiration tool, not a replacement for human creativity. “Almost no one followed the generative designs as-is, they used them as inspiration,” she said (Yiğit & Tiwari, 2024).

In another conversation, Dr. Mathilde Cerioli stressed the risks of over-assistance and warned that systems which bypass children’s developmental readiness can erode cognitive safety (Yiğit & Cerioli, 2024).

And in my conversation with play futurist Yeşim Kunter, she reflected on how play isn’t just about learning; it’s about emotional processing, risk-taking, and identity-building. “We need systems that let children express—not just respond,” she said, noting that playful emotional design is essential to help children feel safe while still exploring challenge and creativity (Yiğit & Kunter, 2024).

When we design content for AI, we must resist the urge to anthropomorphize or entertain at the cost of clarity. Instead, we must create systems that serve as scaffolds, thoughtfully guiding children through decisions, reflection, and play without overstepping into roles that belong to parents, educators, and peers.

Designing for Boundaries: What We’ve Learned at KidsAI

When designing AI systems for children, one of the most important lessons is also the simplest: children take technology and technological tools seriously. They form attachments, assign feelings, and often can’t distinguish between a helpful system and a sentient one—especially if the design encourages that belief.

That’s why AI tools for children must be honest. They shouldn’t simulate friendship or pretend to be something they’re not. They should be clear about their purpose, limited in their claims, and transparent in their behavior.

As the co-founder of KidsAI, I’ve spent the past years building a company grounded in a core belief: that children deserve technology built for them, not just adapted for them.

KidsAI is a child-first AI ecosystem. We develop ethical, age-appropriate AI assistants and media content that help children understand and safely engage with intelligent technologies. Our work spans research, product development, and global advocacy. We collaborate with educators, child development experts, and policymakers to ensure every tool we create is not only useful but safe, responsible, transparent, beneficial, and ethical.

One part of our vision is Olly: a screen-light, task-focused AI assistant designed to support children aged 5 to 12 in developing real-world life skills like planning, decision-making, creative thinking, and reflection.

In shaping our approach, we’ve carefully reviewed global studies on how children perceive AI, and we are conducting research of our own, including surveys, interviews, and co-design sessions across diverse communities. These insights are deepening our understanding of how children interpret systems not only as tools, but often as social actors.

Children are quick to project intention or emotion onto AI, especially if the system doesn’t make its limits clear.

This isn’t just a design insight. It’s a content insight.

Because when we design a system for children, we are not only programming its behavior; we are writing its voice. Its tone. Its pace. Its personality. And that “content”, the words it says, the prompts it offers, the way it gives feedback, teaches children how to respond. How to think. Sometimes, even how to talk. It also models AI literacy at its core.

That’s why we approach every sentence Olly speaks as part of a larger framework: not just what it says, but what it models. We ask: Does this prompt promote agency? Does this feedback respect the child’s autonomy? Are we showing what thoughtful, emotionally grounded communication sounds like?

This includes everything from:

  • avoiding praise that sounds personal (“I’m proud of you”),
  • to replacing open-ended prompts with scaffolded ones (“Would you like to try again or do something else?”),
  • to clearly ending interactions (“Let’s close our notebook now—we can come back to it tomorrow.”)

Every sentence is tested not just for clarity, but for intention. Because the AI doesn’t just prompt activity. It models how to speak to others and how to speak to oneself.
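To make that kind of review concrete, here is a minimal, purely illustrative sketch of how voice guidelines like the ones above could be encoded as automated checks. The phrase lists, rules, and the `review_reply` function are hypothetical examples for illustration, not KidsAI’s actual tooling.

```python
# Illustrative sketch only: a toy "voice guideline" check for an AI
# assistant's candidate replies, loosely based on the design rules
# described above. Phrase lists and rules are hypothetical.

# Phrases that make praise sound personal or simulate emotion.
FIRST_PERSON_EMOTION = ("i'm proud of you", "i love", "i miss you", "i feel")

def review_reply(reply: str) -> list[str]:
    """Return a list of guideline flags for a candidate assistant reply."""
    flags = []
    text = reply.lower()
    # Rule 1: avoid praise that sounds personal or simulates intimacy.
    if any(phrase in text for phrase in FIRST_PERSON_EMOTION):
        flags.append("sounds personal: avoid simulated emotional intimacy")
    # Rule 2: prefer scaffolded choices over fully open-ended prompts.
    if text.rstrip().endswith("anything you want?"):
        flags.append("open-ended: offer a bounded choice instead")
    return flags

# A reply that breaks both rules gets flagged; a scaffolded one passes.
print(review_reply("I'm proud of you! Try anything you want?"))
print(review_reply("Would you like to try again or do something else?"))
```

In practice such checks would only support, never replace, human review by child development experts; the point of the sketch is that voice guidelines can be made explicit and testable rather than left implicit in scripts.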

For children still learning how to navigate emotion, reflection, and social cues, this matters deeply. If the content embedded in these systems is careless, overfamiliar, or vague, the system becomes confusing. If the content is consistent, intentional, and clear, it can scaffold.

So we don’t think of content as the “script” of the system. We think of it as its value system.

And in this next chapter of childhood where screens give way to systems, that may be the most important thing we build.

Conclusion: What We Build, Builds Back

As we step into a new era of AI and child interaction, we must recognize that we are not just designing tools; we are shaping ecosystems. AI does not exist in isolation. It lives within a broader sociotechnical and psychosocial landscape shaped by relationships, behavior, culture, values, and language, hence we call it “sociopsychotechnical”. And children, in particular, will experience AI not as a concept, but as something that speaks, prompts, responds, and ultimately teaches.

This is why content matters more than ever. Not as decoration, but as architecture. Not just as a delivery mechanism, but as the internal logic of what a child experiences through interaction. Content is what will carry our values into every response, every prompt, every pause.

As we design content for AI, we must keep resisting the urge to anthropomorphize or to entertain at the cost of clarity. Children don’t need systems that perform intimacy. They need systems designed to understand their developmental needs and support their thinking without simulating emotional connection or pretending to care.

This is not about minimalism. It’s about ethics. It’s about restraint. It’s about recognizing that, in the absence of clarity, children will make meaning on their own and not always in the ways we expect or intend.

In this new landscape, the most ethical and effective content is the kind that understands what to say, how to say it, and when to step back.

If we want children to grow up with intelligent systems they can trust, we must earn that trust—not through charm or convenience, but through design grounded in developmental insight, cultural sensitivity, and emotional responsibility.

Because children won’t just grow up using AI, they’ll grow up interpreting it. Responding to it. Learning from it.

And what we embed now, into the voices, rhythms, and intentions of these systems, will become part of that future.

Let’s make it something they can rely on—not just to answer their questions, but to protect their sense of self, their relationships, and their right to grow up safely, ethically, and meaningfully in an intelligent world.

About the Author

Evren Yiğit is a children’s media expert and the Co-Founder of Mako Kids and KidsAI. She specializes in designing ethical, child-centered content across media and AI platforms. With a background in comparative literature, early childhood development, and business, she has created and produced content for major broadcasters, cultural institutions, and international organizations including UNICEF and the EU.

Evren holds degrees from Boğaziçi University, Istanbul Bilgi University, and the Netherlands Institute of Higher Education. She is also the author of several books for children and adults, and a speaker on ethical AI, AI literacy, quality content, and the future of children’s engagement with technology.

Selected Bibliography

American Academy of Pediatrics. (2016). “Media and Young Minds”. Pediatrics, 138(5), e20162591. Retrieved May 17, 2025, from https://doi.org/10.1542/peds.2016-2591

Culver, K. (Ed.). (2024). The Quality Question: Children’s Media and the Public Interest in a Platform Era. Center for Media and Information Literacy.

Druga, S., Vu, S. T., Likhith, E., & Qiu, T. (2019). Inclusive AI Literacy for Kids Around the World. In Proceedings of the 2019 ACM FabLearn Conference (pp. 104–111). ACM. Retrieved May 17, 2025, from https://doi.org/10.1145/3311890.3311904

Fisch, S. M. (2004). Children’s Learning from Educational Television: Sesame Street and Beyond. Mahwah, NJ: Lawrence Erlbaum Associates.

Fisch, S. M. (2008). What’s on the Plotline? The Role of Narrative in Children’s Learning from Educational Television. Journal of Children and Media, 2(1), 1–8.

Fisch, S. M. (2017). “Programming for Preschoolers: What Makes for Effective Educational Television”. Retrieved May 17, 2025 from https://www.researchgate.net/publication/315519392_Programming_for_Preschoolers_What_Makes_for_Effective_Educational_Television

Global Ties for Children. (2023). “Ahlan Simsim Findings”. Retrieved May 21, 2023, from https://globaltiesforchildren.nyu.edu/as-findings 

Kolucki, B., & Lemish, D. (2011). “Communicating with Children: Principles and Practices to Nurture, Inspire, Excite, Educate and Heal”. UNICEF.

Lemish, D. (2007). Children and Television: A Global Perspective. Oxford: Blackwell Publishing.

Miao, F. (2024). AI Competency Framework for Students: A Tool to Promote Digital Literacy, Responsibility, and Inclusion in AI Education. Paris: UNESCO.

Project Zero & Pedagogy of Play (Harvard Graduate School of Education). (2021). Pedagogy of Play: Supporting Playful Learning in Classrooms and Schools. Cambridge, MA: Harvard University.

Valkenburg, P. M., & Piotrowski, J. T. (2017). Plugged In: How Media Attract and Affect Youth. Yale University Press.

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Yiğit, E., & Tiwari, S. (2024). “KidsAI Interview Series: Integrating AI into Early Childhood Learning”. KidsAI Journal. https://kidsai.io/journal

Yiğit, E., & Cerioli, M. (2024). “KidsAI Interview Series: Prioritizing Children’s Well-being and AI”. KidsAI Journal. https://kidsai.io/journal

Yiğit, E., & Kunter, Y. (2024). “KidsAI Interview Series: On the Future of Play and AI”. KidsAI Journal. https://kidsai.io/journal