[{"content":" Workflow - Fra Idé til AI-Vurderingsværktøj # Kildematerialet # Jeg startede med tre markdown-filer fra uddannelsen — læringsmål, krav til rapporten og EK\u0026rsquo;s Dare-Share-Care-koncept. Jeg uploadede dem til Claude og bad den om at omsætte dem til en rubric med kriterier, vægte og niveaubeskrivelser. Claude læste materialet, identificerede de relevante krav og strukturerede det til syv kriterier.\nFørste version # Claude byggede en simpel FastAPI-backend med ét endpoint /grade der tog en rapporttekst, sendte den til Claude API\u0026rsquo;et og returnerede en JSON-vurdering. Det virkede — men var skrøbeligt. Ét API-kald, ingen fejlhåndtering, og modellen returnerede af og til JSON med engelske feltnavne eller score som string i stedet for tal.\nReliability-forbedringer # Jeg besluttede at bygge det ordentligt. Claude tilføjede tre nye moduler:\nSelf-consistency — i stedet for ét enkelt kald kører systemet nu tre parallelle kald og tager median-scoren. Hvis kaldene er uenige om niveauet på et kriterium, flagges det med et advarselsikon i frontend\u0026rsquo;en.\nSchema-reparation — engelske niveauer som high og medium mappes automatisk til høj og middel. Score som string koerces til float. Manglende kriterier udfyldes med ukendt frem for at crashe.\nRetry med exponential backoff — transiente API-fejl håndteres automatisk med op til tre forsøg og stigende ventetid mellem dem.\nFeatures # 7 vurderingskriterier udledt af læringsmål, rapportkrav og Dare-Share-Care Struktureret JSON-output med niveau, score, begrundelse og evidens pr. kriterium Forbedringsforslag og dialogspørgsmål til den mundtlige eksamen Tegnoptælling i kode — ikke i modellen — så omfangsvurderingen er præcis Prompt-injection forsvar så rapportens indhold ikke kan manipulere modellen Audit-log der gemmer alle vurderinger med request-id og prompt-hash Test # Vi testede systemet på tre rigtige praktikrapporter. 
The results:\nStudent Level Score Student 1 - Cloud Operations høj 81 Student 2 - Ski-trip startup middel 67 Student 3 - AI Research middel 56 The rubric could differentiate: the three reports really are different in quality, and the scores reflect that.\n🔧 Things That Needed Fixing # The API key disappeared every time the terminal was closed, because set only sets variables for the current session. The fix was a .env file with python-dotenv, so the key is loaded automatically when the backend starts.\nCORS errors appeared because the frontend and backend ran on different ports. The fix was to let FastAPI serve index.html directly via the /ui endpoint, so everything ran on the same port without browser restrictions.\n500 errors came from the API key not being set in the environment. The terminal showed the exact error message - ANTHROPIC_API_KEY not set - so it was quick to track down.\nAll in all, good learning points. 😄\nWhat I Learned # About LLM integration: A language model in a software application is not just an API call. It is about prompt design, error handling, schema validation and deciding what to do when the model is uncertain. Self-consistency with variance reporting was the single most important improvement.\nAbout vibe coding: AI is a fantastic tool, but it is not magic. You still need to understand what you want, communicate clearly and debug when something goes wrong. The import errors taught me more about Python\u0026rsquo;s module system than hours of tutorials would have.\nAbout building projects: The rubric is the hardest part - not the code. Translating vague requirements into precise level descriptions with concrete examples is the work that decides whether the assessments are useful or generic.\nAI\u0026rsquo;s mistakes # AI is not perfect. There were imports with the wrong path, a default model name that did not exist, and output that worked locally but not in production. It is important to recognise that you cannot trust AI 100% - but it is still an enormously powerful tool to code with. 
You just need to stay on top of what you ask for and verify that it actually works.\nTry It Yourself # The API is live with a Swagger UI where you can paste in a report and get an assessment back within seconds. The code is on GitHub with full documentation and 19 automated tests. You need to create a .env file and add an API key yourself to get it running.\nLink to GitHub # https://github.com/FredeBas/Ai-Grader\n","date":"27 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/ai_grading/","section":"Blogs","summary":"","title":"Ai-Grading","type":"blog"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/","section":"Blogs","summary":"","title":"Blogs","type":"blog"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/Portfolio/","section":"Portfolio","summary":"","title":"Portfolio","type":"page"},{"content":" 🤖 What is Vibe Coding? # Vibe coding is a development method where you describe what you want to an AI, and the AI writes the code. You are the architect and director - the AI is the developer. You don\u0026rsquo;t necessarily need to know all the technical details. You just need to know what you want and communicate it clearly.\nIn this case I used Claude from Anthropic as my AI partner. The entire project - from the first line of code to deployment - was built in a single conversation with Claude.\n📋 Workflow - From Idea to Live Website # Step 1: Photos of the Quiz Sheets # I photographed my quiz sheets from the meditation course and uploaded them to Claude. Claude read the questions, identified the correct answers and understood the structure straight away.\nStep 2: First Version # Claude built a first version of the quiz as a simple HTML page. It worked locally but had issues loading files correctly via Live Server in VS Code.\nStep 3: Migrating to React + Vite # We decided to build it properly with React and Vite - a modern JavaScript framework and build tool. 
Claude generated all the necessary files and set up the project from scratch.\nStep 4: Features # 21 questions across 5 categories Instant feedback with jokes for each answer Live score tracking and progress bar Animations and modern UI with gradient design Results page with personalised feedback Step 5: Deployment with GitHub Actions # We set up GitHub Actions to automatically build and deploy the project to GitHub Pages every time new code is pushed to GitHub. That means I never have to think about deployment - it happens completely automatically.\nStep 6: Live! 🎉 # After all the fixes the quiz was live on GitHub Pages - hosted for free and accessible to everyone.\n🔧 Things That Needed Fixing # No project goes completely smoothly - and this one was no exception. Here are the problems we ran into along the way:\nThe white page was the biggest challenge. After deployment the page showed nothing - just white. It turned out to be three separate problems that all needed fixing.\nFirst, the deployment file was in the wrong place. It needed to be in a specific folder for GitHub Actions to find it - something we had overlooked from the start.\nThen there was a problem with the base path in the configuration. GitHub Pages requires you to specify exactly which path the website lives at - and it has to match the repo name on GitHub exactly, including uppercase and lowercase letters.\nFinally there was a problem with how the CSS framework Tailwind was being loaded. We used a quick CDN solution that works locally but not in a proper production build. We had to install Tailwind correctly as part of the project instead.\nPermissions errors in GitHub Actions meant the bot didn\u0026rsquo;t have the rights to write to our repo. This was fixed by enabling \u0026ldquo;Read and write permissions\u0026rdquo; in the GitHub settings.\nAll in all these were good learning points - and the kind of mistakes you only make once. 
😄\n😄 My Favourite Feature: Jokes With Every Answer # The most fun part of the project is the feedback system. For every question you get a joke explaining why your answer is right or wrong. For example:\nQuestion: The goal of meditation is to be completely empty of thoughts\nWrong answer: \u0026ldquo;Nope! Our brains are like thinking machines - that\u0026rsquo;s their job. The task is to be still around the thoughts, not to make them disappear. 🧠\u0026rdquo;\nQuestion: You can only meditate properly if you sit in the lotus position\nWrong answer: \u0026ldquo;Oh no! You can be inwardly still anywhere - even while doing the dishes! (Although that would make for an unusually peaceful kitchen) 🧖‍♀️\u0026rdquo;\n🎓 What I Learned # About meditation: It\u0026rsquo;s not about removing thoughts. It\u0026rsquo;s about changing your relationship to them. You can have a whole festival of thoughts in your head and still be completely still - because stillness is an inner position, not an outer state.\nAbout vibe coding: AI is a fantastic tool, but it\u0026rsquo;s not magic. You still need to understand what you want, communicate clearly and debug when things go wrong. The white page taught me more about deployment than hours of tutorials ever would have.\nAbout building projects: The idea is the hardest part. Once you have it, the path from idea to finished product is shorter than ever thanks to AI.\n❌ AI mistakes # AI is not perfect, and as you can see in this project, the Danish it was translated into is not flawless either. Perhaps better prompts on my part could have improved it. It\u0026rsquo;s important to see that you can\u0026rsquo;t trust AI 100%, but it\u0026rsquo;s still a powerful tool to code with - you just have to put some thought into what you prompt it with. I started this project with a two-line prompt to see what would happen. It was awful - 
it looked like someone playing with HTML for the first time. After a narrower prompt with much more detail about what you want, it gives you a lot more.\n🚀 Try It Yourself # The quiz is live and free to use: 👉 https://fredebas.github.io/Meditation-Quiz/\nA Thank You to Claude 🤖 # This project wouldn\u0026rsquo;t exist without Claude. Not because I can\u0026rsquo;t code - but because vibe coding with AI makes it possible to go from idea to finished product in a fraction of the time it would normally take.\nIt\u0026rsquo;s not about AI replacing developers. It\u0026rsquo;s about AI giving you superpowers to build things faster, learn along the way and focus on the idea rather than the syntax.\n\u0026ldquo;Vibe coding isn\u0026rsquo;t the future - it\u0026rsquo;s the present.\u0026rdquo;\n","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/coding_agent/","section":"Blogs","summary":"","title":"Building a meditation app with react and an Ai-agent","type":"blog"},{"content":" RAG Chatbot Project Overview # Project Purpose # I built a RAG-based chatbot as part of my course in AI-driven application development. 
The goal was to create an intelligent assistant that can provide tailored answers based on selected sources rather than relying solely on the underlying model\u0026rsquo;s general knowledge.\nRAG stands for Retrieval Augmented Generation – an architecture where the chatbot first finds relevant material, retrieves it, and then uses it to formulate precise answers.\nCore Functionality # The chatbot enables users to ask questions about uploaded material or documentation.\nInstead of generating responses purely from language intelligence, the system operates as follows:\nreceives a question from the user searches through text segments in the submitted material identifies the most relevant passages uses these as the foundation for generating a response The result is a chatbot that excels when working with documents, reference material, or domain-specific content. Implementation Process # I followed a classic RAG workflow:\nStarted with a collection of documents or text Divided the material into smaller, manageable chunks Converted these chunks into numerical representations (embeddings) Stored embeddings in a searchable database When a user asks a question, the system retrieves the most relevant passages These passages are sent along with the question to the language model The model generates an answer based on the retrieved context The core principle was connecting my own material to an LLM in a structured and reproducible manner. 
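The workflow above can be sketched end to end in a few lines. This is a toy illustration under loud assumptions: the word-count embedding stands in for a real embedding model, the in-memory list for a vector database, and the final string for the prompt that would be sent to the LLM.

```python
import math
from collections import Counter

# Toy RAG retrieval pipeline: chunk -> embed -> retrieve -> build prompt.
# A production system would call an embedding API and a vector store here.

def embed(text):
    # Toy embedding: lowercase word counts instead of a learned vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(text, size=40):
    # Divide a document into smaller, manageable word chunks.
    words = text.split()
    return [' '.join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks, k=2):
    # Return the k chunks most similar to the question.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question, context):
    # The retrieved passages travel to the language model with the question.
    return 'Context: ' + ' --- '.join(context) + ' Question: ' + question
```

Swapping the toy embed for a real embedding model and the sorted list for a vector index changes the quality, but not the shape, of the pipeline.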
Advantages of the RAG Approach # Through this project, I discovered several important benefits of RAG:\nTargeted relevance – answers focus on the chosen topic\nFewer hallucinations – the model is grounded in actual sources\nFlexibility – the system can work with custom documents\nSpecialization – enables domain-specific assistants At the same time, there are also important limitations to consider:\nanswer quality depends directly on the quality of the source material\nthe choice of chunking strategy and retrieval method is critical\nif incorrect context is retrieved, the answer becomes weaker\nRAG is not always the right solution for every problem type\nKey Learnings # This project taught me the following:\nhow RAG architecture works in practice the role of embeddings in text representation and search how retrieval connects language models to external knowledge the difference between standard chatbots and RAG-based systems the importance of data quality and structure in AI systems A crucial insight was that AI development isn\u0026rsquo;t just about the model itself, but about the entire pipeline surrounding it. Technology Stack # Large Language Model (LLM) Retrieval Augmented Generation (RAG) Embedding generation Document chunking Semantic search and retrieval Final Thoughts # This project gave me practical experience with how modern AI assistants can be connected to real data and customized sources.\nThe most eye-opening aspect was seeing how a chatbot becomes significantly more useful when it can answer from carefully selected sources instead of only general knowledge. This made the project feel much more realistic and closer to solutions encountered in industry applications.\nIssues I had with the RAG chatbot # I had problems implementing the automation tooling for the chatbot. I wanted to set it up with GitHub Actions so that documents pushed to the repo are ingested automatically. 
It got a little complicated, and I almost lost a computer screen because I couldn\u0026rsquo;t get it to work 😂 That didn\u0026rsquo;t happen, though, because I found out why it failed: it was my own mistake for going through the guide once instead of twice - I had put a wrong variable in my GitHub Actions workflow.\n","date":"17 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/rag/","section":"Blogs","summary":"","title":"Rag Chatbot","type":"blog"},{"content":"A description of the peli project will go here.\n","date":"12 April 2026","externalUrl":null,"permalink":"/Portfolio/projects/peli/","section":"Projects","summary":"","title":"peli","type":"projects"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"/Portfolio/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":" Different type of project # I really liked that on the first day of AI our teachers wanted us to implement a portfolio with all of our projects. It is a very different approach from what we were used to.\nWhat I like about it is that you can now easily share both the process and the end product with others.\nLooking forward to seeing where my portfolio goes.\nAt times the AI was frustrating, though.\n","date":"10 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/first_day/","section":"Blogs","summary":"","title":"First day on ai","type":"blog"},{"content":"","externalUrl":null,"permalink":"/Portfolio/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/Portfolio/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","externalUrl":null,"permalink":"/Portfolio/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"","externalUrl":null,"permalink":"/Portfolio/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"}]