Design Technologist
Research, UX, prototyping, and engineering — practiced across the full stack, from earliest concept to shipped product.
Selected Work
In 2020, built an original tool that mapped Amazon's Figma design system to its React component library — click any component, get exact production code and accessibility props. Figma shipped its own version of the concept in 2024. They called it Code Connect.
Led a cross-functional redesign of Amazon's employee HR chatbot — introduced the first structured CSAT measurement the product ever had, landing at 4.65/5.
Led front-end development for a hybrid React Native / React app serving Amazon's global employee base — production TypeScript, GraphQL, i18n, CI/CD, and full accessibility compliance.
Senior Front-End Developer on Apple's product launch team — built the iMac, iMac Pro, Mac Pro, MacBook Pro, MacBook Air, Mac Mini, Safari, and Newsroom pages on one of the highest-trafficked sites in the world. The App Store page still runs the same design today.
Built the software and hardware system behind an ML-powered dress worn at the Met Gala — Watson sentiment data translated into real-time LED color, live on the red carpet. Now in the Henry Ford Museum.
Built a real-time WebSocket network across 5 apps and custom Arduino hardware, embedded in a Lincoln Continental and shipped to Shanghai for user testing. Six weeks from kickoff to demo.
Capabilities
AI Thread
Conceived and built a celebrity lookalike widget using ML image classification and facial recognition — matching users' Facebook friends to NBC talent, deployed in a second-screen mobile experience.
Collaborated on the concept and build of the IBM Cognitive Dress — Watson text classification translating social media sentiment into real-time LED color on a dress worn at the Met Gala. Now in the Henry Ford Museum.
Built an early Alexa voice application for a Nespresso pitch — voice control wired to a hacked espresso machine. An early exploration of conversational AI and connected hardware.
Deployed ML text classification models via Amazon Comprehend at 90%+ accuracy to evaluate UX writing quality at scale. Embedded in Figma plugins and web apps. Trained UX writers to build and run their own models — an early human-in-the-loop AI tooling deployment inside a large org.
Building Figma MCP integrations and custom Claude skills for design workflows — tooling that lets AI reason directly about design files, generate annotations, surface component code, and close the gap between design intent and production output.
About
I'm a design technologist with over a decade at the intersection of UX and engineering — most recently at Amazon, before that at Apple, Smart Design, and Ogilvy. I studied at ITP at NYU, which is where I first understood that the most interesting problems don't belong entirely to any one discipline.
I've been building with AI since 2012 — facial recognition at NBC, Watson-powered wearables at Ogilvy, NLP classifiers at Amazon. I took time away from the industry to pursue something completely different, and I came back because AI has made software development feel genuinely exciting again.
The tools available now — Claude Code, Figma MCP, spec-based development — are changing what it means to work at the design/engineering boundary. I want to be building at that frontier.
Contact
Available for product design, design systems, and design-engineering work. I work well in small teams, ambiguous problem spaces, and places where "that's not my job" isn't a useful phrase.
hello@joshuaberry.com ↗ LinkedIn ↗ GitHub ↗ Resume (PDF) ↓