Project Modai: Process thread [FIN.]

Latest update: PROJECT MODAI COMPLETE (for now).
http://boards.core77.com/viewtopic.php?f=20&t=23096

===

Hi all!

My name is Julius, I am a senior ID + HCI major at Carnegie Mellon.
My fall senior project/thesis is an exploration of future mobile devices titled Project Modai.
You may have seen/participated in the topic I posted on core discussing our relationships with our mobile devices.
I just wrapped up the research phase with a problem framing video to pitch our project. You can view it here:
Project Modai Introduction on Vimeo
(The [vimeo] tag doesn't work :[)

I plan to use this thread, as well as my Tumblr (http://projectmodai.tarng.com) which has Disqus comments enabled, to show my design process and get community feedback.

===

My whole project revolves around the idea of melding ID, UI, and brand. I feel like those are often disjointed in products we see today. How can elements in the UI break out of the screen into the physical world? How can the materials and visual design of the UI feed into the brand? What experiences can be branded?

Another point I want to address is how to make inanimate objects like a device more human and emotional. A lot of us are very dependent on our mobile devices. I want to explore how I can support that relationship with design elements and interactions. What materials will make the device more personal? Feel more comfortable to use? More reliable? Patina?

The final point in my project is sustainability. Not in using recyclable materials, but in supporting technological advances. How can I develop a product that can have its technology upgraded, but have the experience be timeless?

All of those fit snugly with each other. The sustainability is there to foster the emotional connection between the device and the user.

===

Please drop by here and my Tumblr every once in a while to see what I’m up to! I will be posting frequently with my progress.

Thanks!

Logo development from the past few weeks. Ended up back where I started. Debating the gradient since it really reminds me of the Zune logo.


I bounced around a lot going from classic script logos to more aggressive, angular logos. I finally decided to go with my initial concept of the subway-esque path that has all the letters of Modai within it (see top right). It’s modern and hints at connectedness.

Just a quick reaction: the M reads like a graph with the current status declining.

I agree with nxakt. Right now it looks like a brand mark developed by an industrial designer… which is not a good thing :wink:

I'd recommend you take a step back, put together a list of words you would like the mark to represent, and compile an image board of contemporary marks in the zone you want to be in. Also, remember your focus: the brand mark/campaign could be a semester-long project in itself. You want to focus on the design of the device itself, so I would quickly benchmark some great existing brand marks and create something. In the end, it should just be a background graphic or a tiny mark on the screen, not an emphasis point.

Some resources:

http://www.graphicdesignblog.org/hidden-logos-in-graphic-designing/

I would suggest doing some research on making the interface more tangible and intuitive.

Yesterday's products had a readable interface. A blender blends, and the disc on the front rotates so you can change the mode or speed. A vacuum cleaner vacuums, and the button on it is there to start or stop it… Nowadays mobile devices can do so much that the product isn't readable at all. Elderly people do not know how to operate these kinds of devices and are intimidated by them.
Even I am… and I'm only 30 years old :wink:
Other things to be considered:
Life expectancy: mostly these devices get thrown away pretty quickly because of the rapid change of technology > Cardboard? Upgradable devices? Could you implement a Tamagotchi effect so people will feel bad if they throw them away? Recycling?
Energy usage? Do we really need a full-screen 16 million color display? Charging? Tactility, or the lack of it, in current devices.


Here are some inspiration links:

And my answer to the tactile-less interface:

Oh, and why are you already designing a logo if you haven't got a product or a design vision/strategy/guidelines?
Focus on what's important. What are the real problems? Fix them in a million possible ways.
Then design a product, or make a list of possible guidelines to fix the problems.

Good luck :wink:

Thanks for the feedback on the logo. I knew it was a huge problem.

nxakt: Damnit. :[

yo: I did come up with a bunch of words and I did look at a bunch of marks in the zone I want to be in (they’re in the first process page). I didn’t like any of the stuff I came up with. I’ll push back on that and make that process a little more explicit.

atohms: I did frame my problem already, and your suggestions are in the introduction video (did you watch it? there's a link to it in the first post: Project Modai Introduction on Vimeo). I know what I'm designing.

Regarding your suggestion of making interfaces more tangible and intuitive: that is one of my core focuses. I want to merge UI and ID. How do interactions breach the wall between digital and physical? I have a couple of ideas in mind already that I'll need to process a bit (such as how notifications go into the physical world, context-aware things, etc.).

Although I do agree that the elderly are intimidated by mobile devices in general, that is not my target/focus for this project. I've been working on a persona, and I'm designing for a young Millennial who's a freshman in college. I'll have more on that later.

One of my other core focuses is "life expectancy", so yes, I am looking at upgradability. In terms of energy usage, I am not an engineer, nor do I have the resources to address these things. I do know that the amount of color you display has a negligible effect on battery life. The primary factors in battery performance are screen brightness, vibration, WiFi/3G, phone calls, and graphics/CPU-intensive things (games). Those things are unavoidable. I am looking into a recent invention of a paper-thin battery by Stanford researchers.

Well, maybe just sit on it for a bit. The process is never as linear as we make it seem. Maybe something later on will inspire the logo mark. Also, I typically constrain myself to a single color plus black and white on these. That's usually all the best marks ever have.

Thanks for the tip. The biggest problem I’m encountering right now is the process of the whole project. I’m used to either a UI centric HCI approach or an ID centric approach. While both are similar, I’ve never had to juggle both at the same time. It’s proving to be a challenge that’s exciting and frustrating at the same time.

I learned a few tips for doing a UI + ID project from Device Design Day, but it’s still hard to implement.

I’d say the hardest part is doing it alone! I’m working on several device(s) +UI +tech ecosystem +branding projects. It takes a talented and experienced team to keep it all straight!

Perhaps order things by priority just so you can focus? What will help you get the first job you want? If you show a lot of UI and IxD, that is where employers will see you. A great thing, IF that is what you want.

I did try to watch the video, but it was very choppy and unwatchable. Sorry if you had already paid attention to some of my remarks.
Good luck with the project. Quite interested in how this will unfold.

Refocusing the project

Brainstorming session during class yesterday revealed that the scope of my project was way too broad. I won’t be able to get the depth I need to make a compelling project if I don’t narrow down.

Original list:
This is what I wanted to do before (basically what the problem framing video says)

- Mobile device industrial design (sustainable, upgradable)
- UI visual design
- Navigation within the mobile device interface (home screen, etc.)
- Organizer app
- Camera app (and how that influences the industrial design)
- Social integration
- Integration with other devices
- Humanizing the phone (how does it behave contextually, when you get a call, when you have notifications, when it needs to be charged)
- Branding (timeless, human, integrated)

It’s difficult to process the fact that I simply won’t have time to do all of these things (well). Some feedback I got from my video also questioned the purpose of the project. Why am I exploring this? What’s the gaping problem in mobile devices?

At this point I had a few things that I was interested in exploring: Humanizing technology, the organizer, sustainability, and the camera function. Social integration has been done to death and there’s no major problem with the way it exists now (nothing that would need to be in a senior thesis). Branding is inherent, as well as visual design.

===

The new focus

There is no real emotional connection past the functional (necessity of the device) or the aesthetic (the iPhone is beautiful). Mobile devices today have no personality. I want to explore how the digital and physical manifestations of the mobile device can become more human and engaging and create a timeless relationship between the user and device.

- Industrial design (a form factor that suits daily use and different contexts? sustainable, upgradable, customizable? to help facilitate the timeless relationship)
- UI interactions (make it more human and interactive, breach the wall between digital and physical; how can the system evolve and learn as you use it? how does it act when it needs to be charged? when you have an alarm?)
- An example of how an application would reach into this human space: the organizer (how can reminders be more physical? how can it nag without being annoying? how does it handle things without due dates? missed events? how does it replace mom?)

I will explore these three things while addressing bits of functionality here and there in order to inform my design. I'll be looking at the contexts of use from my research and alluding to other functionality that I won't be focusing on (docking, kickstand, camera, etc.).

===

The brand

I presented the working logo to Core77 and received some informative feedback. The logo at this point is too graphical and implies other things (like a falling stock market graph). It also doesn’t really reflect my goals with this project. It looks more robotic and autonomous rather than human and emotional.

In order to address this, I will be revisiting the visual design process to approach it from a more "human" perspective. I pulled together the following mood board for the logo and brand. I chose logos that reflect both a vintage, timeless feel and a more human, script feel. At the same time, one or two of the logos have a touch of the modern (bottom right) in their use of color and minimalist form.

===

I will be in Toronto this weekend visiting design firms and meeting up with alumni and some folks from Core. Hopefully I’ll come back refreshed and ready to tackle Modai v2.

Hey Julius

I've just been looking through all your stuff on your new portfolio site (which looks ace). Just a couple of ideas I thought I'd suggest.

You're focusing on making sure the interaction between user and mobile is seamless and intuitive with Modai. Have you thought about the interactions that take place beyond the user and device themselves? I've recently been looking at the 'Internet of Things' topic (first championed by Mark Weiser ages ago), and we're more and more seeing the mobile device used as an "enabler": a transportable screen that lets you interact with the environment around you.

Another thing that comes out of this is the sustainability issue: if you have a screen with you at all times, why does there need to be a screen included in, say, your washing machine or microwave? The interactions between your device and the objects around it become much more interesting in that way. It's just another way of thinking about the idea of sustainability.

I've read through a lot of your project blog, and I think it would be interesting to take a look at what else the phone might interact with in the future.

I think it's a great project, by the way, really interesting. I did a project focused on similar themes for my grad project (Sustaining Empathy in Throwaway Electronics), but it definitely wasn't as detailed as what you're undertaking. I'm going to be watching closely.

Actually, I just noticed that 'atohms' had posted pretty much the same thing before me…

Hi Julius,

This project looks really interesting.
I've been working at Kinneir Dufort in Bristol, UK for the past 2 years. Thanks very much for commenting on the REVIVE project, I hope it has helped inspire you!
I've been developing the UI for the device and am interested in enhancing the emotional connection between users and their products. Check out the video on this page that focuses on this aspect:

http://www.dshott.co.uk/revive_full.html

I’ll certainly try to follow your project and I’ll let my creative director know about it too.
We’re developing REVIVE when we have spare moments and hopefully with some client involvement soon, so look out for updates!
Following you from my personal account ( twitter.com/_dshott ), and I'll try to support you through the company account tomorrow.

PS: I was trying to find Fabian Hemmert's (I'd forgotten his name) "breathing phone" concepts, which I thought might help, but I see on your blog that you've found them already (pretty raw, but totally awesome, right!)

More info/images on Kinneir Dufort’s REVIVE project:
REVIVE project - kinneir dufort
REVIVE on designboom
REVIVE on mashable

(>_<)
Duncan Shotton
dshott.co.uk

sdcrosla: Thanks for the advice! Yes, I had listed that as one of my goals at the beginning, but I realize now that I only have 2 months to finish this project, so I’ve decided to narrow it down to focus on the human aspect.

Duncan: Yes! I saw Revive and I was really intrigued by your take on a sustainable and human device! It was a great concept and was confirmation that I was on the right track. Thanks for the links and advice!

===

Progress
You can view this post in its natural habitat here: http://blog.tarng.com/post/1353679145/ive-found-it-the-ui-paradigm

So I've made some major progress in the past week. I just got back from Toronto, and I've come up with some key ideas for Modai.

As I said previously, Project Modai now has a new, narrower focus: it is an exploration of the connection between a user and his device (or, making mobile devices more human). I will do so by exploring status (anthropomorphism, feedforward/feedback, alerts, notifications), a growing/learning UI, and a sustainable/upgradable/customizable device. I may also explore the element of fun (thanks to this article by Method exploring what video games teach us about motivating people: Method: 4 Things Video Games Teach Us About Motivating People).
In this post I will go through my persona and the processes I used to find the UI paradigm mentioned in the title.


Persona

A persona is useful for keeping in mind the aspirations, personality, and habits of my target user while I'm designing. I chose more of an extreme user so that my designs will be able to accommodate a wide range of people.

Hunter Lee
18 years old
Incoming college freshman - Architecture

Aspires to be a world famous architect
Living on own, wants to be in control of managing finances, work, social
Lots of student loans (not trying to spend money buying a new phone every year)

Socially active, pursuing girl (constant social media)
Lots of group work, activities, meetings


Brainstorming

I started with a mindmap. This mindmap was a free-for-all of ideas and connections. As I drew out bubbles and connected them to each other, I realized that everything goes back to the aspect of humanizing.

I was also able to drill down into specific categories:
1. Humanizing the expression of status.
Humanizing status expressions requires an in-depth look at feedforward and feedback. There are two categories:

Active
Notifications (alarms, organizer reminders, to-do)
Calls, texts
Lockdown mode (more on this later)

Passive
Battery low
Task list/time (how well you’re managing)
Volume
Signals (WiFi, data)

2. Learning and growing device
Learning your habits
What you use throughout the day/in different locations/what is important (changing the homescreen UI to show what you really need)
Are you always late to meetings? Modai will remind you earlier next time (geolocation tracking if you’re at your meeting spot)
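
To make this a little more concrete for myself, here's a minimal sketch (hypothetical Python with made-up names and numbers, not anything I've built) of the kind of logic I'm imagining for the "remind you earlier next time" behavior: Modai logs how late Hunter actually arrives at meetings via geolocation, then pads the next reminder accordingly.

```python
# Hypothetical sketch: Modai adjusts how early it reminds Hunter about meetings
# based on how late he actually arrived at past ones (observed via geolocation).
# Names and numbers are placeholders, not a real API.

from datetime import timedelta

class ReminderLearner:
    def __init__(self, base_lead=timedelta(minutes=10)):
        self.lead = base_lead          # how early to remind, on top of travel time
        self.late_history = []         # minutes late to past meetings

    def record_arrival(self, scheduled, arrived):
        """Log how late Hunter was to a meeting (negative means early)."""
        self.late_history.append((arrived - scheduled).total_seconds() / 60)

    def next_lead_time(self, travel_time):
        """If Hunter's been late on average, pad the next reminder by that amount."""
        avg_late = (sum(self.late_history) / len(self.late_history)) if self.late_history else 0
        return self.lead + travel_time + timedelta(minutes=max(0, avg_late))

# e.g. a 10-minute trip and a habit of arriving ~5 minutes late
# -> remind 10 + 10 + 5 = 25 minutes before the meeting
```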

Awesome idea: Switching paradigms
As I was thinking about learning and growing, I came up with an idea of switching paradigms. A student needs to balance his social and work life. The UI is split into two facades, and relevant items would only appear on the respective one.


Paradigms

I really liked the concept of switching paradigms, so I brainstormed what each facade would contain.

Fun facade
The fun facade holds all the information and applications that Hunter needs when he’s in his social mode. Common items would be:

Social media
Texts and calls from known friends
Social photos (to FB, Twitter)
Entertainment (blogs/rss, music, movies, games)
Organizer (social tasks)

Professional facade
This facade holds information relevant to Hunter’s work, such as:

Organizer (see urgent/upcoming, quick add)
Time management (if you’re currently doing a task, how much time you have)
Document work (photograph, voice record, notes)
Communication (work related phone calls, emails, group members)
Share (projecting presentations)

Both
Time
Weather
Status information (battery, signals)

Interaction between the two
This is where things get fun. I’m not entirely sure how you would switch between the two paradigms (flipping the phone, a switch, context-aware), but I thought of this neat concept:

In a normal Fun view, you don't see any of the Professional facade. However, if you have a to-do that's due soon, the Professional UI would start to creep into the Fun view, hinting that you should shift your paradigm to Professional.
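
Just to pin down what I mean by "creep", here's a minimal sketch (hypothetical Python; the 4-hour window is made up for illustration) of how the amount of Professional facade showing in the Fun view could grow as a deadline approaches.

```python
# Hypothetical sketch of the "creep" idea: the closer a Professional to-do's deadline
# gets, the more of the Fun view the Professional facade takes over.

def professional_creep(hours_until_due, creep_window_hours=4.0):
    """Return the fraction (0.0-1.0) of the Fun view ceded to the Professional facade."""
    if hours_until_due >= creep_window_hours:
        return 0.0                      # nothing due soon: pure Fun view
    if hours_until_due <= 0:
        return 1.0                      # overdue: fully shifted to Professional
    return 1.0 - hours_until_due / creep_window_hours

# e.g. a task due in 1 hour -> 0.75, i.e. three quarters of the way to a full shift
print(professional_creep(1.0))
```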


What’s next
I really want to know how people would like their Fun and Professional views organized. I will be conducting some velcro modeling with some classmates to get some different opinions. I will also be brainstorming ways you can switch between the two views.

One thing that I didn't really think much about this time was the sustainable ID. I had a small idea where the physical buttons (volume, lock) would be movable to accommodate right- and left-handed users, adding a level of personalization.

New blog update! See it in its natural habitat: http://blog.tarng.com/post/1364176931/paradigm-make-tool-today-i-used-this-make-tool

===

Paradigm make tool

Yesterday, I used this make tool in a brainstorming session with some other design students to generate ideas of what each paradigm (fun and professional) would contain, as well as the interactions within. A lot of good ideas surfaced, and over the next day I’ll be compiling the ideas we generated into a UI concept.

Insights
Context-aware switch (geolocation and time), with a manual option for other occasions (rough sketch after this list)
Metaphor gestures. We thought of a Clark Kent → Superman shift of taking off your work clothes to go into your fun mode:

Frame widgets. Current iOS home screens are just full of app icons. Android widgets give much more information, but are pretty static. Remember iframes? Those, but on mobile home screens that let you scroll through your emails or updates without losing sight of your other frames. Example:

Relevant surfacing. Based on your widgets/frame content/what’s new, relevant apps would surface to the top. In the above case, your to-do list frame surfaces the apps that Modai thinks you need to complete the tasks.
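
For the context-aware switch mentioned at the top of this list, here's a minimal sketch (hypothetical Python; the location and calendar inputs are placeholders, not real APIs) of how Modai might pick a paradigm from the calendar and geolocation while still respecting a manual flip.

```python
# Hypothetical sketch of the context-aware switch: choose the paradigm from the
# calendar and geolocation, but let a manual flip win until it's cleared.

WORK_PLACES = {"studio", "library", "meeting room"}

def choose_paradigm(location, in_scheduled_work_event, manual_override=None):
    """Return 'professional' or 'fun' for the home screen facade."""
    if manual_override:                      # Hunter flipped it by hand; respect that
        return manual_override
    if in_scheduled_work_event:              # calendar says class or meeting
        return "professional"
    if location in WORK_PLACES:              # geolocation says a work spot
        return "professional"
    return "fun"

# e.g. at the bus stop with nothing on the calendar -> Fun paradigm
print(choose_paradigm("bus stop", in_scheduled_work_event=False))  # fun
```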

Next time
There were a whole lot more, but that's for next time. I'll have a few wireframes (done in Keynote, animated) to show.

I've also decided to write less wordy posts, with more visuals. It'll take more work, but I think it will be awesome. Look forward to it!

Changing the process: design through scenario
See this post in its natural habitat here: http://projectmodai.tarng.com

I've been having a hard time wireframing the UI: a mobile device UI has so many potential uses that it's mind-boggling to try to wireframe them all on my own.

Then I remembered that the whole point of Modai is to explore the future of mobile devices through a scenario based on the user Hunter Lee (read more about him here). Here is a rough outline of the scenario, titled A Day with Modai:

A Day with Modai
8:10 AM: Hunter’s alarm goes off. Modai tries to get him up with a different alarm than yesterday (to prevent him getting used to it). Hunter shakes Modai to snooze.

8:15 AM: Modai tries again to wake Hunter, this time with a louder, and more upbeat song. It also decides to throw in some vibration. Hunter shakes Modai harder to snooze.

8:20 AM: Modai has finally had it. It pulls out the big guns, and turns the volume and vibration to 11, and also turns on Hunter’s laptop and speakers. Hunter finally groggily wakes up and starts to interact with Modai to show it he’s awake (no turning off and falling back asleep here!)

8:21 AM: Hunter checks his email first, hoping his classes were canceled (they never are). He switches Modai to the Professional paradigm [Prodai] to see his work-related mail and updates. He quickly scrolls through the headlines/previews in the Mail frame and, to his dismay, there are no emails about class. He checks the weather so he knows what to wear. It's sunny. He goes to get ready for class.

8:29 AM: Hunter is running late for class. While standing at the bus stop, he pulls out Modai to check the time (without pressing anything). He unlocks the phone. Modai knows he's at the bus stop via geolocation, so it Googles the bus schedule. It's coming in 3 minutes. Hunter takes a seat and checks his Fun paradigm [Fundai]. He then updates/tweets a status that he's waiting for the bus.

8:36 AM: Hunter gets to morning studio. The professor is walking around. He places Modai on the desk. Modai knows his schedule and that he's in class right now, so it silences itself and turns off 3G and WiFi to save battery (metaphor: old antennas).

9:10 AM: The professor introduces a new group project. Hunter meets with his new group, and asks Modai to start a new project in the organizer. He enters in the group members and deadlines.

10:20 AM: Group meeting time. Hunter saw some inspiring architecture over the weekend and took pictures of it. He takes Modai and bumps it with his groupmates' mobiles (also Modai?) to sync his photos.

11:15 AM: The group sets up an out-of-class meeting time. Hunter pulls out Modai, adds a new meeting to his organizer, and syncs with his groupmates. They also assign tasks, so Hunter enters a new task to do by tonight.

11:34 AM: Hunter is hungry. Modai’s learnt that he usually gets hungry on Thursdays around this time, so it surfaces the school’s dining Twitter when he pulls out his phone so he knows what’s on the menu. (or a menu app?)

11:47 AM: Hunter got his food and is eating in the cafeteria. He is by himself so he pulls out Modai and sets it up to read some RSS. He is happy and enjoying solitude.

11:49 AM: Hunter’s friends find him and join him to eat. Modai senses the other people and goes into its shy mode to not intrude on human interaction. (or maybe his friends come and Hunter wants to continue his solitude so Modai ambiently shows his desire to be alone)

12:52 PM: Hunter goes to studio and procrastinates by checking Facebook on his phone.

1:00 PM: Hunter has homework due at 1:30PM, and it’s still on his to-do list. Modai notices this and starts to warn Hunter (who’s still on [Fundai]) of the impending deadline. He immediately starts to work.

1:15 PM: Modai’s warning gets more intense.

1:25 PM: Hunter finally finishes and turns in the homework. He proceeds to mark the task as done, feeling great. Modai congratulates Hunter on a job well done.

1:27 PM: Hunter decides to start working on his group project. He checks the to-do list to see what he needs to do. He needs to do some sketches based off of his group's research. Modai surfaces the email (or Gdoc) that has the group research. He starts working.

1:41 PM: Hunter gets a text from this girl he likes, Haley, while he’s working. Modai signals this physically without turning on the UI. Hunter notices and checks the text and responds to it. He wants to show his studio space so he takes a picture and sends it off.

2:10 PM: Modai alerts Hunter that he has a meeting at 2:30 PM. It's learnt that Hunter takes 10 minutes to pack up at studio, so it warns him 20 minutes in advance for the 10-minute trip to the meeting.

2:11 PM: Modai's running low on battery. It signals Hunter that it needs juice. Hunter plugs him in.

2:31 PM: Hunter arrives at the meeting. Modai surfaces the notes app so he can take notes.

2:36 PM: Hunter receives a call during the meeting. Modai signals quietly that Hunter should pick up his phone, but it is not an urgent call. Hunter ignores the call.

4:30 PM: Hunter walks to the bus stop. He puts on his favorite music while he looks through Twitter.

4:50 PM: Hunter arrives at home. Modai turns on his lights. He goes to his fridge and uses Modai to check if anything needs to be replenished. Modai notices he’s low on eggs. He adds eggs to his shopping list.

5:10 PM: The mailman comes. It's Hunter's new hardware for Modai! He installs it and mails the old parts back.

6:20 PM: Hunter gets an email from his groupmate. Since Modai knows Hunter is in a group with him, it notifies him about the email even though it wouldn't for other senders. Hunter checks the email, which asks him to send over his sketches. Hunter photographs them as part of the project, and Modai automagically sends them off.

8:10 PM: Hunter goes grocery shopping. Modai detects he's in a store and shows his financial info. He's close to his monthly limit. Hunter pays for the eggs he needs with Modai.

8:30 PM: Hunter gets back and starts working. He knows it'll be a long night and wants to manage his time better, so he asks Modai [Prodai] to track what he's spending his time on.

10:00 PM: Hunter’s been working for a while. Modai suggests a break.

1:30 AM: Hunter wraps up his work for the night. He reviews his time usage while working today. He sees that he’s spent at least 25% of his time tonight on Facebook and Twitter. Now he’s aware of this and vows to cut down.

1:45 AM: Modai hints to Hunter that he should turn in for the night. If Hunter sleeps within 5 minutes, he'll be refreshed when he wakes at 7:50 AM (sleep cycles) for his 8:30 class (Modai increases the estimated time he'll take to get to class based on earlier today). Modai knows Hunter won't wake up if he stays up later. Hunter accepts the alarm set by Modai and goes to sleep.

What’s next?
I will take this outline and start to storyboard it, deciding what screens of the UI need to be shown at what stage as well as the physical interactions. This is in place of an insanely complex wireframe. I’ll be checking back on this constantly to revise the scenario as I develop Modai.

Initial concepts
See this post in its natural habitat at: http://projectmodai.tarng.com

I’ve been in and out of designing throughout the whole research and problem setting phase. Here are some of those concepts.

This is the sketch that started it all. At the start of the semester, I really had no idea what to do for my project. I was thinking about mobile devices and sketched out some ideas: pie menu for app selection, showing carrier/signal status as part of the ID, using feedforward for the volume level by having the bar be right next to the speaker hole, physical tab popping up for notifications, mailbox arm to signal new mail…

ID

This is a collage of sketches I’ve done of the physical mobile device. I’m exploring materials, ways to make it look thinner, different shapes for functionality, the looks while charging.

UI

Home screen

I've mostly been focusing my efforts on UI. This is a very early sketch from when I was focusing on visual design, before I decided to make this project about humanizing Modai. I realized after doing a page of these that I wasn't really learning anything and that visual design is fruitless unless there's structure and reason underneath. I'm exploring pie menus, different lock screens, as well as ID that complements the UI (button placement in the middle of the screen).

The following were done today and are done with the humanized Modai in mind. This is a sketch of the home screen interaction with the contextual frame (where Modai would list items that were relevant to the context: FB notifications, link to bus schedule when you’re at the bus stop, menu at the cafeteria…) as well as the relevant apps related to the contextual frame.

I broke the screen into 3 parts:

  1. Status (signal, time, etc)
  2. Main screen (context frame, apps)
  3. Modai (I’ve been thinking whether or not I should have an avatar of Modai at the bottom, similar to the Revive concept I posted earlier).

Notifications

I also thought about notifications. My initial idea (from the very first sketch) was a hardware tab that would pop up (magnetically) when there were notifications to be read. Pressing down on the tab would drop down a drawer with the notifications. The alternative is an Android-style drag-down interaction.

Switching paradigms

These interactions came about from the brainstorming session a few days ago. First is the Clark Kent zipping and unzipping work clothes metaphor. Second is a hardware/physical interaction, which I decided was either unnecessary or could cause accidental switching. The final concept is an avatar that you could “turn” into professional or fun paradigm. I don’t like any of the three, so I’ll keep thinking.

Multitasking/app launching

1. Modai as Agent
I thought about having Modai's avatar act as an agent for application management. If you drag him up at any time, Modai will bring up a keyboard and show the currently running apps. If you start typing on the keyboard, it will display live search results for applications, contacts, and emails, as well as quick shortcuts to start a note or send an email (see WebOS 2.0's JustType). There's a rough sketch of this behavior after the second concept below.

2. Microscope
A combination of WebOS’s cards and iOS’s icon multitasker. Dragging along the icons at the bottom would scroll through the cards at the top, “magnifying” the icon’s contents. You would close apps by dragging the card up.
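
Going back to the Agent concept above, here's a minimal sketch (hypothetical Python with placeholder data, nothing tied to a real device) of the behavior I have in mind: dragging the avatar up shows the running apps, and typing filters live across apps, contacts, and quick actions.

```python
# Hypothetical sketch of the Agent concept. All data here is placeholder,
# not a real device index.

APPS = ["Organizer", "Camera", "Mail", "Maps"]
CONTACTS = ["Haley", "Prof. Ramirez", "Mom"]
QUICK_ACTIONS = ["New note: {q}", "Email about: {q}"]

def agent_search(query, running_apps):
    """Return what the agent panel would show for the current keystrokes."""
    if not query:
        return {"running": running_apps}     # just dragged up: show running apps
    q = query.lower()
    return {
        "apps":     [a for a in APPS if q in a.lower()],
        "contacts": [c for c in CONTACTS if q in c.lower()],
        "actions":  [a.format(q=query) for a in QUICK_ACTIONS],
    }

# e.g. typing "ma" surfaces Mail and Maps plus quick note/email shortcuts
print(agent_search("ma", running_apps=["Mail", "Twitter"]))
```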

As you can see, there's an insane amount to design, and then to bring together. I'm a bit overwhelmed at the moment and keep going back and forth between different parts of the process. I had hoped that by writing the scenario I would become a little more focused. However, I'm still unsure what to do next. There are so many things where the interaction depends on both ID and UI.

mmm…is that Bello Pro with League Gothic I spy?

I know the logo is a small thing compared to the overall project, but just from a typographical standpoint I think switching the script and serif fonts so it reads (Project Modai) would make a lot more sense in the context of what you're doing. "Project" is the straightforward, neutral information; "Modai" is what gives the project character. Also, you've got 4 different fonts going on, which I think you can pare down to just Bello and League Gothic. The differences make the little peeking "M" (Gotham Rounded?) pretty distracting, even though it communicates what's going on.

This is a little off-topic, but remember Tamagotchis? Your avatar idea kind of reminds me of those and how fun (albeit annoying) they were.

Finally, some sketches! I don't know what it is, but when I see a bunch of wireframes my eyes start to glaze over; I'd much rather see the UI sketched out in those little whiteboard snapshots.

I’m liking the zipping and unzipping thing. That is a nice twist on the experience, and analogous to the way people get changed after work to go out… even if it is just taking the tie off to go for drinks.

You need a twist on that level on the ID side. Right now the physical object is very expected. Spend the same amount of time thinking about how to embody that in the physical, or how not to embody it… give us a wink and a smile through the product.

The smartphone is becoming an increasingly centralized trusted hub for our information, our communications, and our networks. I’d be interested in not just seeing another smartphone with an interesting UI (yawn), but a system of objects that allows you to interact with the intelligence of the smartphone in different ways.

Think of it with this assumption: in 5 years, your smartphone will have the processing power and memory of a MacBook Pro. What can you do with that? Does it dock with a monitor and become the brains of a workstation, automatically altering its UI for that purpose? Does it dock with your hotel room and alter the environment? Does it become the intelligence in your car?

Or, does it not need any of that to do any of that? Will the way we work and play fundamentally change because of the tools we have?