UX design cross-channel
Sometimes the objective is to have the same functionality cross-channel and cross-platform, which brings a new challenge to design teams. You can follow a straight path, where a click simply becomes a touch when changing devices, but in some cases getting more from each context leads you to better and more usable solutions. The key point is that people will reach your app or website from different devices, and making it usable on every device requires time and effort.
Sometimes, trying to have usable products on all platforms leads to minimizing functionality. Most current guidelines for multiplatform design even recommend a mobile-first approach. When you work this way, mobile constraints push you to avoid useless icons and information that doesn't bring much to the user. It seems like good advice, but sometimes it is just not possible. In some projects the desktop platform is already online, perhaps with years of life behind it and many regular users. In other projects, offline workflows drive the decisions you make when working on desktop and mobile.
It seems like a difficult path to follow, but achieving these goals results in designs that are easy, useful and functional. Bringing desktop tools to mobile usually requires bearing in mind the options that are exclusive to each platform. In some of the recent projects I have been involved in, we needed easy flows for working with multiple items, and the different platforms called for different solutions. On desktop, the Control/Command key is commonly used to multiselect items, but that is not the answer when we think of mobile. The mobile standard tends to be the long tap, but that doesn't always work well in responsive browser solutions.
There are many other options in the toolbox, such as toggles or explicit "add item" flows. In the end, testing different solutions with the actual and potential users of your systems is probably the best approach.
On mobile we cannot forget the range of gestures at our disposal: long taps, pinch open/close, shaking the device… plus the options for dragging from different parts of the screen. Many options are available, and we should expect that users will progressively associate many of them with certain functions across different apps.
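To make a gesture reliably mean the same thing across apps, it first has to be classified consistently from raw touch data. This is a minimal sketch under assumed thresholds (the 500 ms hold time and 10 px drift tolerance are common choices, not standards), showing how two of the gestures above can be distinguished:

```typescript
// Classifying two common touch gestures from raw touch points.
interface Point { x: number; y: number }

const LONG_PRESS_MS = 500;   // assumed long-tap threshold
const MOVE_TOLERANCE = 10;   // px of drift still counted as a press

// A touch counts as a long press if it stayed put long enough.
function isLongPress(
  downAt: number, upAt: number, start: Point, end: Point
): boolean {
  const held = upAt - downAt;
  const drift = Math.hypot(end.x - start.x, end.y - start.y);
  return held >= LONG_PRESS_MS && drift <= MOVE_TOLERANCE;
}

// Pinch open/close: compare finger distance at start and end,
// with a 10% dead zone so tiny movements register as "none".
function pinchDirection(
  a0: Point, b0: Point, a1: Point, b1: Point
): "open" | "close" | "none" {
  const d0 = Math.hypot(b0.x - a0.x, b0.y - a0.y);
  const d1 = Math.hypot(b1.x - a1.x, b1.y - a1.y);
  if (d1 > d0 * 1.1) return "open";
  if (d1 < d0 * 0.9) return "close";
  return "none";
}
```

In a browser these functions would be fed from `touchstart`/`touchend` (or Pointer Events) coordinates and timestamps; keeping the classification pure makes the thresholds easy to tune during user testing.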
In addition, we cannot forget the small things that mobile users accept and assume as rules. Using the phone's GPS to geolocate maps is becoming more frequent, but off-screen interactions are also being considered and tested in applications like Waze, along with possibilities like paying through the phone, as in Uber.