Desktop and Android Web Apps Test Day – 6/29

Hi Everyone,

On June 29th, the Desktop and Mobile QA teams will be hosting a test day covering web apps integration on desktop and Android, as well as apps in the cloud for desktop. We'll run MozTrap test cases against each feature, do exploratory testing, verify fixed bugs, and build new test cases. Completing these tasks matters because it gives us a clearer picture of what works and what doesn't, what new problems exist, and what new areas we can test in these features. It also lets our team better assess the health of each feature, since more problems get identified and reported to each feature's driver.

What is the Web Apps project? An excellent quote from our MDN documentation here summarizes what this project is:

“The Open Web Apps project enables developers to create rich HTML5 app experiences that run across multiple devices and form factors (desktop, mobile, tablet, etc.) using Web standards and open technologies such as HTML5, CSS and JavaScript.”

Interested in this product and in helping out? Then join us on IRC in #testday. You’ll gain great experience in finding bugs, running test cases, and more!

Everyone is welcome to participate. There’s no experience required. We’ll have moderators to answer any questions and guide you throughout the day.

If you want to get a head start, check the test plan. Details will be posted there as they materialize.

If you aren’t available to help on Friday, email me or contact me (jsmith) on IRC in #qa.

Thanks in advance!

Desktop Web Apps – Landed on Nightly, Try it Out!

Hi Everyone,

We’ve landed support for the major feature I’m testing on Nightly – Desktop Web Apps Support on Windows, Mac, and Linux! For the Linux support, I’d like to thank our contributor Marco for working hard to deliver support for desktop web apps on Linux. See his wiki here for what he has done so far for Linux web apps support and what he plans to do in the future.

Additionally, the Mozilla Marketplace is live and ready for testing with the desktop web runtime on Nightly, for Mozillians only. If you are an active Mozillian and want to help test the Mozilla Marketplace or desktop web apps on any operating system, go to this wiki to see how to get started testing the Mozilla Marketplace with the desktop web runtime. If you are not a vouched Mozillian, follow the instructions in this post to get started with testing desktop web apps. Email this feedback alias with your opinion on desktop web apps. Want to get more involved in testing at Mozilla? Then let me know what you are interested in by email! Here are some references on how you can help out:

Have any questions? Join #qa and ping me (jsmith), or email me directly.

Sincerely,
Jason Smith

Apps Mobile Web Compatibility – Gecko vs. Webkit

This month, the Apps QA team has been working with the Mobile Firefox QA team to assess the compatibility of mobile apps on webkit (e.g. the Android stock browser and Chrome for Android) vs. gecko (e.g. Firefox for Android). We analyzed this because the Mobile QA team has in the past noted many web compatibility problems specific to gecko, such as a desktop site rendering instead of a mobile site. Since mobile apps on our platform will run on gecko, we know these problems affect us too.

To analyze these apps, our Apps QA team has been looking at 130+ top sites designed to be web applications. To assess such a large number of sites quickly across webkit and gecko, our team used a funnel-based test approach that breaks down into three distinct steps. The first step is to look at screenshots of each app's mobile site on Firefox for Android vs. Chrome for Android, generated with an automated screenshot script Aaron Train created here. In the screenshots, we look for issues such as the top app's URL not loading anything, user agent sniffing, or a desktop site rendering. We detect user agent sniffing specifically when webkit renders a completely different site than gecko, such as webkit rendering a mobile site on linkedin.com while gecko renders a desktop site. To confirm that user agent sniffing is occurring, we can use a Firefox for Android extension called phony to change the user agent to webkit and see if a different site renders; if it does, we know the web application is sniffing the user agent. After checking for user agent sniffing and other issues through the screenshots, our team makes a judgment call on whether more analysis is needed, for example when the app appears to have more functionality beneath the surface and is not obviously broken.
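Outside the browser, a rough version of that user agent check can also be scripted. The sketch below is a minimal, hypothetical example, not the script or thresholds our team actually used: the URL list and user agent strings are illustrative, and it simply fetches each site once with a gecko mobile user agent and once with a webkit mobile user agent, then compares where each request ends up.

```python
import requests

# Illustrative user agent strings (assumptions, not the exact ones used in testing).
GECKO_UA = "Mozilla/5.0 (Android; Mobile; rv:14.0) Gecko/14.0 Firefox/14.0"
WEBKIT_UA = ("Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) "
             "AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19")

SITES = ["http://www.linkedin.com/"]  # hypothetical sample of the 130+ sites


def fetch(url, user_agent):
    """Fetch a URL with a given user agent, following redirects."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
    return resp.url, len(resp.text)


def looks_like_ua_sniffing(url):
    """Heuristic: if the final URL or page size differs a lot between the two
    user agents, the site is probably serving different content per engine."""
    gecko_url, gecko_len = fetch(url, GECKO_UA)
    webkit_url, webkit_len = fetch(url, WEBKIT_UA)
    different_url = gecko_url != webkit_url
    # Arbitrary size threshold, purely for illustration.
    different_size = abs(gecko_len - webkit_len) > 0.5 * max(gecko_len, webkit_len, 1)
    return different_url or different_size


if __name__ == "__main__":
    for site in SITES:
        flagged = looks_like_ua_sniffing(site)
        print(f"{site}: {'possible UA sniffing' if flagged else 'looks consistent'}")
```

A script like this only flags candidates; the screenshot review and the phony extension check described above remain the way the behavior is actually confirmed.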

If we determine more analysis is needed, our Apps QA team then does a subjective quality analysis of the application on Chrome for Android and Firefox for Android. To do this, our team launches the app and uses some of its underlying functionality, such as logging in, recording music if it is a music app, or viewing status updates if it is a social networking app. After exercising the app's functionality on gecko and webkit, our team classifies the app into the three buckets below, similar to the categories used for the subjective apps quality analysis (a minimal record-keeping sketch follows the list).

Excellent

  • Supported on phone and tablet
  • Look and feel adapts to each device’s requirements, great looking for the platform

Good

  • App functionally works or mostly works as expected
  • User responsiveness is okay to good
  • Non-optimized for mobile or sniffs the user agent badly, but still relatively usable

Poor

  • App won’t render, won’t work entirely, unusable
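
To keep the per-engine ratings comparable across 130+ sites, the results can be captured as a small table of records. The sketch below is one hypothetical way to record and summarize them; the bucket names mirror the list above, while the app names and ratings are invented for illustration.

```python
from collections import Counter
from dataclasses import dataclass

BUCKETS = ("excellent", "good", "poor")  # the three buckets described above


@dataclass
class Assessment:
    app: str
    gecko: str   # bucket on Firefox for Android
    webkit: str  # bucket on Chrome for Android / stock browser
    notes: str = ""

    def __post_init__(self):
        for rating in (self.gecko, self.webkit):
            if rating not in BUCKETS:
                raise ValueError(f"unknown bucket: {rating}")


def summarize(assessments):
    """Count how many apps fall into each bucket per engine, and list
    the apps that rate worse on gecko than on webkit."""
    gecko_counts = Counter(a.gecko for a in assessments)
    webkit_counts = Counter(a.webkit for a in assessments)
    worse_on_gecko = [a.app for a in assessments
                      if BUCKETS.index(a.gecko) > BUCKETS.index(a.webkit)]
    return gecko_counts, webkit_counts, worse_on_gecko


if __name__ == "__main__":
    # Hypothetical data, not real assessment results.
    results = [
        Assessment("example-music-app", gecko="good", webkit="excellent",
                   notes="desktop layout on gecko, but functional"),
        Assessment("example-social-app", gecko="poor", webkit="good",
                   notes="login button does nothing on gecko"),
    ]
    gecko_counts, webkit_counts, worse = summarize(results)
    print("gecko:", dict(gecko_counts))
    print("webkit:", dict(webkit_counts))
    print("worse on gecko:", worse)
```

Records like these feed naturally into the bug triage described next, since the apps that rate worse on gecko are the ones that need a functional or evangelism bug filed.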

After classifying the apps for webkit vs. gecko, our team determines whether any open questions remain that are needed to judge the app's quality; these are addressed on a case-by-case basis. Then, knowing the app's quality on gecko vs. webkit, we log bugs classified as either functional problems within gecko, evangelism problems for gecko, or general evangelism problems for the apps experience. A bug classified as a functional problem in gecko typically comes from the team seeing a layout that the developer implemented correctly but that gecko renders incorrectly. A bug classified as an evangelism problem with gecko means the team needs to evangelize to the app's developer what they need to change to make their app mobile-friendly for Firefox for Android users, such as supporting the Firefox for Android user agent, using gecko-specific CSS prefixes, and more. Last, a bug classified as a general evangelism problem occurs when the app itself does not work in a mobile web environment, such as a desktop site rendering on mobile browsers, broken functionality in any mobile browser, and more. After bugs are filed, our team continues to track these issues over time and re-assess the quality of the website on gecko vs. webkit.

I’m interested to hear opinions from web developers and layout engine developers on what we can do to assess mobile web compatibility, what to evangelize to web developers, and what to fix within existing layout engines such as gecko and webkit. What other techniques can provide a deeper assessment of mobile websites across different layout engines? What web development techniques should be evangelized to developers so their websites run on any layout engine? What should be fixed within gecko so that mobile websites optimized for webkit are also mobile-friendly on gecko?

Subjective Quality Analysis of Apps on the Open Web Apps Project

In late January, the Apps QA team was assigned to determine which apps, from a list sorted by business needs, could be demoed at Mobile World Congress. The challenge is that you have to determine the quality of each app on many different platforms, make the results easy to compare, and know what to do to improve the app experience. Improving the app experience includes asking the developers of the Apps project to fix platform bugs and asking an app's creators to fix problems specific to their app. Additionally, you need to pay attention to the underlying business needs behind each app, since an app's success on your platform benefits the platform as a whole by increasing its use and attracting more people to create and use apps. The target problem to solve is the following:

How do you assess the quality of an app across different platforms and identify ways to improve app experience quality?

The solution I used was to develop a subjective measurement of each app's overall quality on the phone, tablet, web, and native desktop. The measurement consisted of four categories: Excellent, Good, Fair, and Poor. Below I describe each category I used for the subjective app quality analysis.

Excellent

  • Cross-device support (works on all devices)
  • Look and feel adapts to each device’s requirements (phone, tablet, desktop)

Good

  • App functionally works as expected (no functional errors on each device)
  • Relatively usable across devices, even if look and feel doesn’t match each device’s requirements exactly (e.g. a desktop-only site that is still functional at a mobile device’s screen size)
  • User responsiveness is good (pages load in a reasonable amount of time)

Fair

  • App partially works or is only partially usable (some clicks on app work, others generate incorrect behavior, rendering makes app hard to use)
  • User responsiveness is not that great

Poor

  • App won’t render, won’t work entirely, unusable (doesn’t function correctly)

Once a category was assigned to an app on each platform, I provided a rationale for why that quality level was given. For example, one app with an excellent quality level was described as having an “excellent look and feel, easy to use, functional, no issues, app-like.” Another app with a poor quality level had a rationale documenting that it never loaded a live stream, so the app could not be used at all.

With a rationale specified, the next step was to figure out what could be done to improve the quality of the app. The two quality improvement strategies I employed looked at what the developers of the platform could do and what the developers of the app could do to improve the app experience. For platform developers, this typically meant identifying bugs that needed to be fixed on the respective platforms. For app developers, it meant providing accurate, detailed feedback on what needed to change to improve the app experience on our platform.

With these action items specified, I then tracked the issues over time to understand their status. For platform issues, I flagged certain bugs as important to our demo to clearly tell developers that fixing them was especially important to improving the quality of the demo for a certain set of apps. For app developer action items, I kept an ongoing backlog of issues specific to certain apps and their status while communicating back and forth with the developers to ensure the quality of each app improved on our platform. As fixes came in for a particular app, I re-evaluated its subjective score on the affected platforms.
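
The whole loop (a rating per platform, a rationale, action items, and re-evaluation as fixes land) can be captured in a very small record structure. The sketch below is purely illustrative: the platform names, ratings, bug number, and rationale text are invented, and it only shows the bookkeeping idea rather than any actual tracking tooling used.

```python
from dataclasses import dataclass, field

RATINGS = ("Excellent", "Good", "Fair", "Poor")


@dataclass
class PlatformAssessment:
    platform: str                 # e.g. "phone", "tablet", "web", "desktop"
    rating: str                   # one of RATINGS
    rationale: str                # why the rating was given
    action_items: list = field(default_factory=list)  # platform bugs / developer feedback
    history: list = field(default_factory=list)       # previous ratings after re-evaluation

    def re_evaluate(self, new_rating, new_rationale):
        """Record a re-evaluation after fixes land, keeping the old rating in history."""
        self.history.append((self.rating, self.rationale))
        self.rating, self.rationale = new_rating, new_rationale


# Hypothetical example: an app re-evaluated after a platform bug was fixed.
web = PlatformAssessment(
    platform="web",
    rating="Poor",
    rationale="live stream never loads, app unusable",
    action_items=["platform bug 123456 (hypothetical)",
                  "ask developer to provide a fallback stream"],
)
web.re_evaluate("Good", "stream loads after fix; layout still desktop-oriented")
print(web.rating, "| previously:", web.history)
```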

After going through this process, I can see that the subjective analysis succeeded: I ended up with a list of apps that could be effectively demoed. Going forward, I plan to keep using this technique to evaluate app quality in future milestones. I'm also interested to hear about subjective quality analysis approaches you have used to assess quality. What approaches have you used? Were they successful, and why? What value did the subjective analysis provide on your project?