Role
UX Intern @ Design Brewery
Year
2023
Duration
May - June (4 weeks)
Team
5 Designers + Researchers
Skills
UX Audit, Funnel Analysis, Heuristic Analysis, User Flow, UI Redesign
TL;DR
WeWork On-Demand lets users book flexible workspaces: day passes, day pass bundles, and conference rooms. Although 92% of traffic came from mobile, only 0.7% of sessions converted. Over four weeks, we audited the site, analyzed the booking funnel, ran usability testing, and redesigned the booking flow, compressing eight steps into four.
Objectives
The project kicked off with a two-stage approach. In the first stage, I was paired with another UX designer to quickly assess the existing design. Our goal was to identify immediate pain points and develop a concept sample design that we could present to the WeWork team. The second phase involved improving the booking flow and, finally, creating a device-agnostic design for the product.
Phase 1: Initial Audit & Research
A Puzzling Drop-off
92% of the traffic came from mobile, yet the conversion rate was 0.7%
Heuristic Evaluation Findings
We conducted a systematic heuristic evaluation of the existing interface, focusing on the first two steps where most users dropped off.
Creating the Sample Solution
Rather than just presenting problems, we developed a prototype demonstrating potential improvements.

Phase 2: Deep Research
Detailed Funnel Analysis by Service Type
With WeWork's approval, we dove deeper into understanding user behavior across their three main offerings. The data revealed distinct patterns.
We started by segmenting the funnel data by service type.
Day Pass showed 699,000 sessions converting to just 6,100 purchases.
Conference Rooms performed better proportionally, with 56,000 sessions leading to 2,000 bookings.
But the real surprise was Day Pass Bundles—despite being the most profitable offering for WeWork, it attracted only 3,700 sessions with 1,400 conversions. The bundles had a better conversion rate but suffered from low awareness.
The abandonment patterns were consistent across all three offerings. Between 45% and 85% of users dropped off at four critical points: landing on the page, viewing the list of buildings, selecting a specific space, and adding billing details. Mobile showed 70% to 85% abandonment rates across all these steps, consistently worse than desktop.
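The bundle surprise is easier to see when the rates sit side by side. A minimal Python sketch, using only the session and purchase figures quoted above:

```python
# Conversion rates implied by the funnel numbers in this case study.
funnel = {
    "Day Pass":         {"sessions": 699_000, "purchases": 6_100},
    "Conference Rooms": {"sessions": 56_000,  "purchases": 2_000},
    "Day Pass Bundles": {"sessions": 3_700,   "purchases": 1_400},
}

for service, f in funnel.items():
    rate = f["purchases"] / f["sessions"] * 100
    print(f"{service}: {rate:.1f}% conversion")
# Day Pass: 0.9%, Conference Rooms: 3.6%, Day Pass Bundles: 37.8%
```

Bundles converted roughly 40x better than Day Passes per session, which is what pointed us at awareness rather than usability as the bundles' bottleneck.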
Competitor Analysis
Our competitor analysis revealed fascinating contrasts in approach.
MyHQ had implemented quick filtering directly within listing pages, reducing time to purchase.
Workafella used clean, minimal designs optimized for mobile loading speeds.
Regus' checkout flow wasn't particularly innovative, but their listing page was efficient. Filters and sorting options were prominently displayed, and the information hierarchy perfectly matched user priorities.
Social Media
Unclear Product Offerings: WeWork's On-Demand ads didn't specify the different types of flexible workspaces available, such as day passes, bundles, or conference rooms, which confused new users.
Vague Social Media Messaging: WeWork India's social media used too many aspirational adjectives like "inspiring" and "collaborative" without providing concrete details or benefits, unlike their competitors.
Mixed Messaging in Video Ads: Their YouTube video ads tried to appeal to both individual freelancers and large enterprise teams simultaneously, which diluted their message and confused their target audience.
Word Cloud Analysis: A word cloud analysis showed WeWork's social media content lacked keywords related to amenities, while competitors like Workafella had a better balance of descriptive adjectives and service-oriented words.
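The counting step behind a word cloud like this is straightforward keyword frequency. A sketch of the idea; the amenity term list and the caption strings are illustrative stand-ins, not WeWork's or Workafella's actual copy:

```python
from collections import Counter
import re

# Hypothetical amenity vocabulary to check social captions against.
AMENITY_TERMS = ("wifi", "parking", "coffee", "meeting", "printing")

def amenity_mentions(captions):
    """Count how often amenity-related keywords appear across captions."""
    words = Counter(re.findall(r"[a-z]+", " ".join(captions).lower()))
    return {term: words[term] for term in AMENITY_TERMS if words[term]}

print(amenity_mentions([
    "Inspiring spaces for collaborative teams",      # adjectives only
    "High-speed wifi, free parking, coffee on tap",  # concrete amenities
]))
```

Run over each brand's captions, a tally like this makes the adjective-heavy vs. amenity-heavy contrast measurable rather than anecdotal.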

User Research and Personas
Using the data we were provided, we identified the top three client industries: Technology (obviously), Consulting, and Creative Services. Individual contributors and management-level roles were equally distributed.
Most surprisingly, the majority of On-Demand clients either owned or worked for incorporated businesses. These weren't just freelancers looking for WiFi and coffee. They were serious professionals who needed flexible but professional workspaces.



We developed four personas based on this data:
The freelancer booking spaces while commuting.
The startup founder needing impressive meeting rooms.
The team lead coordinating group bookings.
The digital nomad prioritizing location and amenities.
System Usability Scale Testing
We conducted SUS testing with representative users to quantify the usability issues

To quantify the website's usability issues, we conducted a System Usability Scale (SUS) test with 10 representative users at a nearby WeWork workspace.
The website achieved a final SUS score of 55.25, which is rated as "Below Average". This score is well below the industry average of 68 and falls into the "Marginal" acceptability range, suggesting that users are likely to have a negative experience with the site.
This quantitative data supported our qualitative findings. During the test, users consistently rated the system as unnecessarily complex and noted that there was too much inconsistency throughout the booking process. This test helped us set the focus on areas of improvement in the booking flow.
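For reference, SUS scores come from a fixed formula: odd-numbered (positively worded) items contribute rating minus 1, even-numbered (negatively worded) items contribute 5 minus rating, and the sum is scaled by 2.5. A sketch of that standard calculation; the example ratings are made up, not our participants' actual responses:

```python
def sus_score(responses):
    """Compute one respondent's SUS score from ten 1-5 ratings."""
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: items 1,3,5,... are positive
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Example: a middling respondent
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Our reported 55.25 is the mean of the ten individual scores computed this way.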
Booking Flow Analysis Deep Dive
The existing booking flow required users to make too many decisions too early.

Our detailed booking flow analysis revealed structural problems beyond the interface issues. The "How it works" link on the homepage took users to a completely different website section, breaking their mental model of the booking flow. The "Call us" CTA, meant to be helpful, took the user out of the booking flow with no way to return.
The existing workflow required users to make too many decisions too early. Before seeing any actual workspaces, users had to choose between Day Pass, Bundles, or Conference Rooms—distinctions that weren't always clear. Then they selected a city, then a specific location, all before seeing if spaces were even available.
We mapped out the current user journey and identified multiple points where unnecessary complexity created friction. The system asked for team member details individually rather than just the number of seats needed. The date selection came after space selection, meaning users might fall in love with a space only to find it unavailable on their desired date.
Phase 3: The Redesign
New booking workflow
With research complete and insights synthesized, we began the actual redesign.
The new workflow compressed eight steps into four intuitive stages.
First, the system auto-detected the user's location and immediately showed available workspaces nearby. Users could adjust the location if needed, but smart defaults eliminated friction for the majority.
Second, users browsed and selected spaces with rich visual previews and key information visible at a glance.
Third, they specified their needs: dates, number of seats, and duration, with intelligent suggestions based on availability.
Finally, a streamlined payment process offered saved preferences and one-click options for returning users.
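The four stages can be summarized as plain data, with each stage's smart default called out. The names and wording here are our own shorthand, not product code:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    smart_default: str

# The redesigned flow: four stages, down from the original eight steps.
NEW_BOOKING_FLOW = [
    Stage("Locate",          "auto-detect location, adjustable by the user"),
    Stage("Browse & select", "rich previews with key info at a glance"),
    Stage("Specify needs",   "dates, seats, duration suggested from availability"),
    Stage("Pay",             "saved preferences, one-click for returning users"),
]

print(len(NEW_BOOKING_FLOW), "stages, down from eight steps")
```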

Mobile-First Design Approach
Frontend Architecture
More decisions explained
Results
The interface is rendered in React, using GridStack.js to manage layout, resizing, and drag-and-drop interactions. The renderer interprets the JSON specification coming from the orchestration layer and maps each component definition to a corresponding widget. GridStack’s persistence features, paired with a Supabase backend, allow widget positions and configurations to be stored between sessions. This means that even though Genie dynamically generates new widgets on demand, users retain control over how those widgets live on their screen.


Takeaway