This summer, I had the privilege of joining CyberForce in Istanbul as an AI Security Strategist intern.
It was an eight-week journey (July–September 2025) that combined technical deep dives, professional mentorship, and numerous personal lessons. Meanwhile, I was equally preparing to transition from Turkey to Germany for the second year of my Erasmus Mundus CYBERMACS master’s program.
What started as an intimidating challenge, to “design a security guidebook for AI systems”, became one of the most rewarding experiences I’ve had in my academic and professional path.
In this blog, I want to share that journey: the technical details, the real-life lessons, and the fun little moments that made it uniquely mine.
AI systems are no longer futuristic entities or something of a faraway future. They power everything from healthcare diagnostics to financial trading, autonomous vehicles, and even national security systems.
But like every technology, AI comes with risks:
Traditional cybersecurity frameworks often fail to address these scenarios. Therefore, AI security is becoming its own specialised domain, bridging machine learning risks with established cybersecurity practices.
That’s where my role came in.
At CyberForce, my mission was clear (and slightly terrifying at first😅):
“Create our company’s official AI Security Guidebook — a methodology and checklist that defines the steps to take and the risks to look for when testing AI systems.”
This meant I had to:
Basically, I was part strategist, part architect, and part hands-on builder. (I know right, it sounds as cool as it was 😁)
The framework I designed had three pillars:
My first two weeks were all about immersion: digging through resources like MITRE ATLAS, the OWASP LLM Top 10, NIST AI RMF, Google SAIF, CSA AI Control Matrix and other industry whitepapers, such as from KPMG and Microsoft.
Honestly, what I quickly realised is that AI security is not just cybersecurity with a twist.
It has its own world, requiring us to rethink assumptions about input/output validation, dependency trust, and adversarial behaviour.
By week three, I was deep in full-stack development mode.
Imagine writing over 6,000 lines of JavaScript, tweaking CSS, and trying to get export functionality to stop breaking… (spoiler: it did, more than once).
Copilot sometimes helped — and sometimes broke everything. At one point, I had to start over from backups.
Painful, but also a reminder of the importance of version control.
I learned to:
One highlight was our midpoint presentation to supervisors. They gave us sharp feedback, specifically for me, it was:
I left with a clear to-do list: streamline processes, add benchmarks, and make the framework production-ready.
It wasn’t all code and risk matrices. Some of my favourite memories came from small, human moments:
Before/after graphic of welcome screen design with logo animation
Beyond my day-to-day project work, I invested time in some certifications and structured learning to deepen my expertise.
Two milestones stand out:
A lot of my framework design came from digging into community and industry resources. They helped me with both the technical and governance aspects of my project.
Here are some that proved invaluable:
Core Frameworks & Standards
Courses & Trainings
Industry Insights
And a lot more like articles, white papers and others. Tools like ChatGPT and Claude became part of my research process. I used them to brainstorm improvements, reframe checklist logic, and sanity-check my methodology. Of course, they weren’t a substitute for industry frameworks like MITRE ATLAS or OWASP, but they were invaluable sparring partners, pushing me to think about “what if” scenarios or alternative approaches. Also, I often leaned on GitHub Copilot to speed up repetitive coding tasks or to suggest fixes when debugging export functionality issues.
Overall, these resources became my toolbox, guiding both the practical and governance layers of the AI Security Guidebook I developed.
While my framework may remain a proposal rather than an adopted product, my work added value by:
In short: I helped kickstart CyberForce’s conversation around AI security in a structured, tangible way. (I like to think of it this way, haha😅)
If there’s one thing this internship taught me, it’s that AI security is messy, fascinating, and very human. It requires technical precision and creative problem-solving. I learned to:
A personal note?
Some days I was tired, bored, even frustrated. Other days, I felt unstoppable. That’s the reality of real-world projects, and I guess it is what makes them valuable.
As I was wrapping up the internship, I was preparing to move to Germany for the second year of my Erasmus Mundus CYBERMACS program, starting October 1st.
I’ll continue exploring AI security, and I’m especially interested in how it intersects with space cybersecurity and governance for my thesis. (maybe maybe 😁)
Oh, and the project isn’t ending here — you can still explore it:
In case you want to check it out, my internship report
As I wrap up this chapter, I want to pause and express my gratitude. First, a heartfelt thanks to Ender Gezer, CTO of Cyberforce, for giving me this opportunity and trusting me to contribute to such a forward-looking project. His guidance as both a supervisor and mentor shaped my approach to AI security in ways I’ll carry forward for years.
I’m equally grateful to Tunahan Tekeoğlu, whose mentorship, patience, and feedback helped me refine my work, think critically, and keep pushing toward a production-ready framework.
A special thanks also to my fellow CYBERMACS colleagues (Adolfo and Danium - hearts to you 😊) who interned alongside me at Cyberforce, it made the journey more collaborative, more fun, and a lot more memorable. And of course, to the wider Cyberforce team and all the great people I met along the way — thank you for welcoming me, sharing your knowledge, and reminding me that cybersecurity is not just about technology, but also about people.
This internship was more than just a project. It was an experience of growth, teamwork, and discovery. Definitely, it is one I’ll always look back on with appreciation. ❤️
I believe internships are never just about the work but equally about growth, people, and moments you’ll remember.
For me, CyberForce was all of that: technical challenge, mentorship, community, and a push out of my comfort zone.
To anyone curious about AI security, or wondering if they should take the leap into a specialised, evolving field: do it.
You’ll learn more than you expect (I sure did ;-)), about systems, about security, and about yourself.
Adolfo, Danium and Regine at Cyberforce