WebRTC Test Day on June 21st

Come help test WebRTC in Desktop Firefox and Firefox for Android this Friday, June 21st!

WebRTC Testing – Try out conversat.io and file bugs!

Hi Everyone,

With the Nightly and Aurora builds of Firefox, you should be able to take advantage of existing chat applications that make use of WebRTC-based APIs. One application I’d recommend for testing these APIs is conversat.io. With this application, you can create a room that holds up to six people and conduct a normal meeting over video chat built on WebRTC-based APIs.

So how can you help test WebRTC-based APIs using this application? First, you can dogfood the application in day-to-day scenarios that involve video chat. Examples include:

  • Conduct a small public meeting for an open source project
  • Conduct a 1:1 meeting with a family member
  • Talk with your friends over video chat

Second, you can run test cases directly against this application across Firefox Nightly, Aurora, and Chrome. Here’s an example set of test cases I’d recommend trying out:

  • Start a video call between Firefox and Chrome with a room you’ve created in conversat.io. Now, talk back and forth in the call. Can you hear in Chrome the audio you spoke into Firefox, and vice versa?
  • Conduct a long-running video call between two Firefox tabs that goes for 20 minutes. After 20 minutes have passed, is the video from each camera still running correctly? If you make a sound near one tab, can you still hear it in the other? (A rough automation sketch for keeping this scenario running appears after this list.)
  • Start a video call between two Firefox instances on different machines. After the call has run for a few minutes, close one of the Firefox instances. Does the remaining Firefox instance avoid crashing? Does its remote video stream stop as expected?
  • Start a video call on two different machines running Firefox on different wifi networks for five minutes. Is the video stream still running between both Firefox instances correctly?
  • Start a video call with six different tabs in Firefox running off of the same conversat.io room. Does the video stream come through cleanly on each Firefox tab?
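
To make the long-running scenario above easier to repeat, here is a minimal sketch of an unattended harness. It assumes Selenium WebDriver driving Firefox and that conversat.io rooms live at a URL of the form https://conversat.io/<room-name>; the room name and the permission pref are my own assumptions, and audio/video quality still has to be judged by a human.

    # Minimal sketch: keep a two-instance conversat.io call alive for 20 minutes
    # and check that neither Firefox instance has died. Room URL is hypothetical.
    import time
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    ROOM_URL = "https://conversat.io/webrtc-testday-room"  # hypothetical room

    def launch_firefox():
        opts = Options()
        # Skip the camera/microphone permission prompt so the call can start
        # unattended (assumed pref intended for test automation).
        opts.set_preference("media.navigator.permission.disabled", True)
        return webdriver.Firefox(options=opts)

    caller, callee = launch_firefox(), launch_firefox()
    try:
        caller.get(ROOM_URL)
        callee.get(ROOM_URL)
        for minute in range(20):
            time.sleep(60)
            for name, browser in (("caller", caller), ("callee", callee)):
                # A crashed or hung instance raises a WebDriverException here.
                state = browser.execute_script("return document.readyState")
                print("minute %d: %s page state is %s" % (minute + 1, name, state))
    finally:
        caller.quit()
        callee.quit()

While the harness runs, you still need to speak into each microphone and watch both video streams yourself; the script only rules out crashes and hangs.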

For any issues you run into while conducting video chat with WebRTC APIs in Firefox, feel free to either file a bug here or email me directly with your feedback.

Stub Installer in Firefox Nightly – Try it out, Give feedback, and Test it!

Hi Everyone,

We need your help testing the new Stub Installer for Mozilla Firefox on Nightly! The stub installer is a new installer for Firefox that aims to streamline the installation process for our end users by letting them download a very small executable, run it, and have all of the resources downloaded and installed immediately. With this feature, we will make the installation of Desktop Firefox builds faster and easier for Windows users.
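
To make the flow concrete, here is a rough conceptual sketch of the download-then-install idea in Python; the real stub installer is a native Windows executable, the URL below is a placeholder rather than a Mozilla endpoint, and the silent switch is simply the one NSIS-based installers usually accept.

    # Conceptual sketch only: download a small payload, then run the full
    # installer silently. Not the actual stub installer implementation.
    import subprocess
    import tempfile
    import urllib.request
    from pathlib import Path

    FULL_INSTALLER_URL = "https://example.invalid/firefox-installer.exe"  # placeholder

    def stub_install():
        # Step 1: the tiny stub fetches the full installer package.
        work_dir = Path(tempfile.mkdtemp(prefix="firefox-stub-"))
        payload = work_dir / "firefox-installer.exe"
        urllib.request.urlretrieve(FULL_INSTALLER_URL, str(payload))

        # Step 2: run the downloaded installer silently so the user sees one
        # streamlined flow ("/S" is the usual NSIS silent-install switch).
        subprocess.run([str(payload), "/S"], check=True)

    if __name__ == "__main__":
        stub_install()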

Want to try out an early build of the feature? Here’s how you get started:

  • If it’s before October 8th, you’ll want to use this link to get a build of the stub installer
  • If it’s on or after October 8th, you’ll want to use this link to get a build of the stub installer

If you run into any immediate problems please report the bug here or email stub-feedback@mozilla.com. Note that this stub installer is still in testing and is currently English only.

If you would like to help test this feature in more depth, then try running the test cases you see below. If you hit any problems, feel free to use the links above for reporting a bug and providing feedback. I would greatly appreciate the help in testing this feature! Feel free to email the alias above if you have any questions.

  • Install Firefox with the default installation rules with admin privileges. Verify Firefox is installed to the default installation directory (C:\Program Files\Mozilla Firefox on 32-bit Windows, C:\Program Files (x86)\Mozilla Firefox on 64-bit Windows) with the same contents as the old installer, and that firefox.exe is signed. (Use a diff program such as windiff to compare the directory contents.)
  • Export the Firefox-related 1) HKLM and 2) HKCU entries in regedit from an installation made with the installer released just before the stub installer. Do the same with the stub installer. Use windiff or another diff program to verify that each pair of exports is the same. (A small diff helper sketch follows this list.)
  • Install Firefox with the stub installer, then start Firefox. Verify Firefox starts up with no unexpected errors.
  • Using an installation of Firefox from the stub installer, crash Firefox. Verify that Breakpad (the crash reporter) appears, then submit the crash report. Verify that the crash report was sent to the crash-stats server.
  • Install an older Firefox version with the stub installer, then update Firefox. Verify Firefox updates to the latest version of that release channel.
  • Uninstall a Firefox installation made with the stub installer. Verify that the installation directory is removed along with any start menu/desktop shortcut references, including the pinned-to-taskbar shortcut.
  • After launching a Firefox installed from the stub installer, quit it. Verify Firefox shuts down with no process left running in the background.
  • Open three new tabs in a Firefox launched from the stub installer and load a website in each. Verify the content comes up in each tab.
  • Install Firefox with the stub installer. Then launch Firefox and install an add-on. Verify that the add-on was successfully installed and runs correctly in Firefox.
  • Try installing Firefox as a guest account that does not have write permissions to the Program Files folder on Windows. Verify that the stub installer fails with an appropriate error message indicating why installation failed. (Should we be allowing limited user accounts to install into their user account directory?)
  • Try installing Firefox without an internet connection. Verify that the stub installer fails with an appropriate error saying that the download phase failed because there is no internet connection.
  • Conduct a custom installation of Firefox by changing each default installer preference to some alternative valid value (e.g. change the installation directory, don’t allow start menu shortcuts). Verify that Firefox installs according to the custom installation prefs set by the user.
  • Try installing Firefox as an admin while an antivirus is running (e.g. Norton) with default preferences on the antivirus. Verify that the stub installer installs Firefox successfully without the antivirus flagging anything suspicious.
  • Install Firefox with the old installer. Then pave-over install this installation with the stub installer. Verify that the stub installer successfully installs Firefox with no odd behavior or unexpected issues.
  • Install Firefox with the stub installer. Then pave-over install this installation with an older installer. Verify that the older installer overwrites each piece of the stub-installed build, that launching it shows no errors, and that no unexpected errors are seen.
  • Install Firefox with the stub installer. Then pave-over install with a different version of the stub installer. Verify that the Firefox installation is successful, that it can be launched, and that no issues are seen in the resulting directory structure.
  • On an old Firefox build, install an add-on. Then pave-over install this installation with the stub installer and launch Firefox. Verify that the add-on is still installed and operates as expected.
  • Test installation on Vista with UAC on/off.  Test on Windows 7 with UAC at each level.  Test with Windows 8 at each UAC level, but in particular with UAC off since UAC works differently on Windows 8 when it is off.
  • Check what happens when you run out of disk space while downloading and installing.
  • Turn off the download server the stub installer references. Try to install Firefox. Verify that the stub installer fails with an appropriate error saying that it could not connect to the server.
  • Set up an HTTP proxy-based tool to capture incoming HTTP requests (e.g. Fiddler). Try to install Firefox. When the HTTP response is sent back, capture it, fuzz the response, and then send it on to the stub installer. Verify that the stub installer fails gracefully with no odd behavior or unexpected errors.
  • Start up the stub installer, start installation, and immediately lock the screen for a few minutes. Then unlock the screen. Verify Firefox still installs successfully with no unexpected errors.
  • Start up the stub installer, start installation, and immediately put the machine into hibernation for a few minutes. Take the machine out of hibernation. Verify that installation finishes successfully with no unexpected errors.
  • Start up the stub installer, start installation, and immediately put the machine into standby for a few minutes. Take the machine out of standby. Verify that installation finishes successfully with no unexpected errors.
  • Start two stub installers at the same time – this scenario may happen with an automatic plus a manual download.
  • Check the experience of initiating the install from IE or Chrome, to make sure it hasn’t regressed compared to the normal installer.
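
For the registry-comparison case mentioned in the list above, a small helper like the following can stand in for windiff. It assumes you have already exported the relevant keys (for example with regedit or "reg export"), and the .reg file names are placeholders for your own exports; this is my own suggestion, not part of the official test cases.

    # Compare two regedit exports and print any differences.
    import difflib

    def load_reg_export(path):
        # regedit writes .reg exports as UTF-16 text with a byte-order mark.
        with open(path, encoding="utf-16") as f:
            return f.read().splitlines()

    old_export = load_reg_export("hklm_previous_installer.reg")  # placeholder
    new_export = load_reg_export("hklm_stub_installer.reg")      # placeholder

    diff = list(difflib.unified_diff(old_export, new_export,
                                     fromfile="previous installer",
                                     tofile="stub installer",
                                     lineterm=""))
    print("\n".join(diff) if diff else "Exports match: no registry differences.")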

Desktop and Android Web Apps Test Day – 6/29

Hi Everyone,

On June 29th, the Desktop and Mobile QA teams will host a test day covering web apps integration on desktop and Android, plus apps in the cloud on desktop. We’ll run MozTrap test cases against each feature, do exploratory testing, verify fixed bugs, and build new test cases. Completing these tasks matters because it gives us a clearer picture of what works and what doesn’t, what new problems exist, and what new areas we can test in these features. That, in turn, lets our team better assess the health of each feature, since more problems get identified and surfaced to each feature’s driver.

What is the Web Apps project? An excellent quote from our MDN documentation here summarizes what this project is:

“The Open Web Apps project enables developers to create rich HTML5 app experiences that run across multiple devices and form factors (desktop, mobile, tablet, etc.) using Web standards and open technologies such as HTML5, CSS and JavaScript.”

Interested in this product and helping out? Then, you should join us on IRC on #testday. Upon helping out, you’ll gain great experience in finding bugs, running test cases, and more!

Everyone is welcome to participate. There’s no experience required. We’ll have moderators to answer any questions and guide you throughout the day.

If you want to get a head start, check the test plan. Details will be posted there as they materialize.

If you aren’t available to help on Friday, email me or contact me (jsmith) on IRC on #qa.

Thanks in advance!

Desktop Web Apps – Landed on Nightly, Try it Out!

Hi Everyone,

We’ve landed support for the major feature I’m testing on Nightly – Desktop Web Apps support on Windows, Mac, and Linux! For the Linux support, I’d like to thank our contributor Marco for working hard to deliver desktop web apps on Linux. See his wiki here for what he has done so far for Linux web apps support and what he plans to do in the future.

Additionally, the Mozilla Marketplace is live and ready for testing with the desktop web runtime on Nightly, for Mozillians only. If you are an active Mozillian and want to help test the Mozilla Marketplace or desktop web apps on any operating system, then go to this wiki to see how you can get started testing the Mozilla Marketplace with the desktop web runtime. For anyone who is not a vouched Mozillian, follow the instructions in this post to get started with testing desktop web apps. Email this feedback alias with your opinion on desktop web apps. Want to get more involved with testing at Mozilla? Then let me know what you are interested in by email! Here are some references for how you can help out:

Have any questions? Then, join #qa and ping me (jsmith) or email me directly with your questions.

Sincerely,
Jason Smith

Apps Mobile Web Compatibility – Gecko vs. Webkit

This month, the Apps QA team has been working with the Mobile Firefox QA team to assess the compatibility of mobile apps on webkit (e.g. Android Stock Browser, Chrome for Android) vs. gecko (e.g. Firefox for Android). The underlying reason we analyzed this is that, in the past, the Mobile QA team has noted many web compatibility problems specific to gecko, such as a desktop site rendering instead of a mobile site. Since this has been a problem before, the Apps QA team knows it affects us too, as mobile apps will run on gecko as their platform.

To analyze these apps, our Apps QA team has been looking at 130+ top sites designed to be web applications. To assess this many sites quickly across webkit and gecko, our team used a funnel-based test approach that breaks down into three distinct steps. The first step is to look at screenshots of the mobile site on Firefox for Android vs. Chrome for Android for each app, using an automated screenshot generation script Aaron Train created here. When we look at the screenshots, we are looking for issues such as the URL for the top app not loading anything, user agent sniffing, or a desktop site rendering.

For user agent sniffing specifically, we detect this issue when webkit renders a completely different site than gecko, such as webkit rendering a mobile site on linkedin.com while gecko renders a desktop site. To confirm that user agent sniffing is occurring, we can use a Firefox for Android extension called Phony to change the user agent to webkit on Firefox for Android and see if a different site renders. If a different site renders, then we know the web application is sniffing the user agent. After checking for user agent sniffing and other issues through the screenshots, our team makes a judgment call on whether more analysis needs to take place, for example when we notice that the app has more functionality to exercise and is not obviously broken.
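
For anyone who wants to reproduce the user-agent-sniffing check on a desktop machine, here is a rough sketch of the idea. It is not the Android screenshot script linked above: it drives desktop Firefox through Selenium, spoofs a WebKit-style mobile user agent via the general.useragent.override pref (the same kind of override an add-on like Phony performs), and the example UA string and site are illustrative.

    # Rough sketch: load the same site with the default Firefox UA and with a
    # spoofed WebKit-style mobile UA, then compare the screenshots by eye.
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    SITE = "https://www.linkedin.com/"  # the example discussed in the post
    WEBKIT_MOBILE_UA = ("Mozilla/5.0 (Linux; Android 4.0; Nexus) AppleWebKit/535.19 "
                        "(KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19")

    def capture(url, out_file, user_agent=None):
        opts = Options()
        if user_agent:
            # Standard Firefox pref for overriding the user-agent string.
            opts.set_preference("general.useragent.override", user_agent)
        driver = webdriver.Firefox(options=opts)
        try:
            driver.get(url)
            driver.save_screenshot(out_file)
        finally:
            driver.quit()

    capture(SITE, "site_gecko_ua.png")
    capture(SITE, "site_webkit_ua.png", user_agent=WEBKIT_MOBILE_UA)
    # Substantially different screenshots suggest the site is sniffing the UA
    # rather than feature-detecting.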

If we determine more analysis is needed, our Apps QA team then does a subjective quality analysis of the web applications in Chrome for Android and Firefox for Android. To do this subjective quality analysis, our team would launch the app and use some of its underlying functionality, such as logging in, recording music if it is a music app, or viewing status updates if it is a social networking app. Upon using the app’s functionality on gecko and webkit, our team would classify the app into one of the three buckets below, similar to what was used for the subjective apps quality analysis.

Excellent

  • Supported on phone and tablet
  • Look and feel adapts to each device’s requirements, great looking for the platform

Good

  • App functionally works or mostly works as expected
  • User responsiveness is okay to good
  • Not optimized for mobile, or sniffs the user agent badly, but still relatively usable

Poor

  • App won’t render, won’t work entirely, unusable

Upon classifying the apps for webkit vs. gecko, our team would then determine whether any open questions remain that are needed to judge the app’s quality. These questions are addressed on a case-by-case basis. Then, knowing the app quality on gecko vs. webkit, we log bugs classified as either functional problems within gecko, evangelism problems for gecko, or general evangelism problems for the apps experience. A bug classified as a functional problem in gecko typically came from the team seeing a layout that was correctly implemented by the developer but rendered incorrectly by gecko. A bug classified as an evangelism problem with gecko means that the team needs to evangelize to the developer of the app what they need to change to make their app mobile-friendly for Firefox for Android users, such as supporting the Firefox for Android user agent, utilizing gecko-specific CSS prefixes, and more. Last, a bug classified as a general evangelism problem occurs when the app itself does not work in a mobile web environment, such as a desktop site rendering on mobile browsers, broken functionality in a mobile browser, and more. After bugs are filed, our team continues to track these issues over time and re-assess the quality of the website on gecko vs. webkit.

I’m interested to hear from web developers and layout engine developers: what can we do to assess mobile web compatibility, what should we evangelize to web developers, and what should we fix within existing layout engines such as gecko and webkit? What other techniques can provide a deeper assessment of mobile websites across different layout engines? What web development techniques should be evangelized to developers so that their websites run on any layout engine? What should be fixed within gecko to allow mobile websites optimized for webkit to also be mobile-friendly for gecko?

Subjective Quality Analysis of Apps on the Open Web Apps Project

In late January, the Apps QA team was assigned to determine which apps, from a list sorted by business needs, could be demoed at Mobile World Congress. The challenge is that you have to determine the quality of a list of apps on many different platforms, make them easy to compare, and know what to do to improve the app experience for those apps. Improving the app experience includes asking developers of the Apps project to fix bugs and asking the creators of each app to fix problems specific to their app. Additionally, you need to pay attention to the underlying business needs behind each app, as the success of an app on your platform benefits the platform as a whole by increasing its use and attracting more people to create and use apps. The target problem to solve is the following:

How do you assess the quality of an app across different platforms and identify ways to improve app experience quality?

The solution I used was to develop a subjective measurement of the app’s overall quality on the phone, tablet, web, and native desktop. The measurement consisted of four categories: Excellent, Good, Fair, and Poor. Below I summarize the description of each category I used for the subjective app quality analysis (a small sketch of how these assessments can be captured for tracking follows the category descriptions).

Excellent

  • Cross-device support (works on all devices)
  • Look and feel adapts to each device’s requirements (phone, tablet, desktop)

Good

  • App functionally works as expected (no functional errors on each device)
  • Relatively usable across devices, even if look and feel doesn’t match each device’s requirements exactly (e.g. desktop only site, but functional on mobile, device screen-size)
  • User responsiveness is good (pages load in a reasonable amount of time)

Fair

  • App partially works or is only partially usable (some clicks on app work, others generate incorrect behavior, rendering makes app hard to use)
  • User responsiveness is not that great

Poor

  • App won’t render, won’t work entirely, unusable (doesn’t function correctly)
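
As mentioned above, here is a small illustrative sketch of how assessments against this rubric could be recorded for tracking. The category names mirror the rubric; the field names and example apps are made up for the sketch and are not a tool the team actually used.

    # Illustrative tracking structure for subjective app quality assessments.
    from dataclasses import dataclass
    from enum import Enum

    class Quality(Enum):
        EXCELLENT = "Excellent"
        GOOD = "Good"
        FAIR = "Fair"
        POOR = "Poor"

    @dataclass
    class AppAssessment:
        app: str
        platform: str    # "phone", "tablet", "web", or "desktop"
        rating: Quality
        rationale: str   # why this quality level was assigned

    assessments = [
        AppAssessment("Example Streaming App", "tablet", Quality.POOR,
                      "live stream never loads, so the app cannot be used"),
        AppAssessment("Example News App", "phone", Quality.EXCELLENT,
                      "excellent look and feel, easy to use, functional, app-like"),
    ]

    # Apps that need follow-up with platform or app developers.
    needs_action = [a for a in assessments
                    if a.rating in (Quality.FAIR, Quality.POOR)]
    print(len(needs_action), "assessment(s) need follow-up")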

Once the categories were assigned for the app on each platform, I then provided a rationale for why that quality level was given. For example, one app with an excellent quality level was described as “excellent look and feel, easy to use, functional, no issues, app-like.” Another app, with a poor quality level, had a rationale documenting that it never loaded a live stream, so the app could not be used.

With a rationale specified, the next step was to figure out what could be done to improve the quality of the app. The two quality-improvement strategies I employed looked at what the developers of the platform could do and what the developers of the app could do to improve the app experience. For the platform developers, this typically involved specifying certain bugs that needed to be fixed on the respective platforms to increase the app experience quality. For app developers, this involved providing accurate, detailed feedback to the developer on what needs to change to improve the app experience on our platform.

With these action items specified, I then moved forward to track these issues over time and understand the status of each one. For platform issues, I flagged certain bugs as important to our demo to clearly tell the developers that fixing those bugs was especially important to improving the quality of the demo with a certain set of apps. For app developer action items, I kept an ongoing backlog of issues specific to certain apps, along with their status, while communicating back and forth with the developer to ensure that the quality of the app improved for the platform. As fixes for a particular app came in, I would then re-evaluate the app’s subjective score for the affected platforms.

After going through this process, I can see that the subjective analysis was a success, as I ended up with a list of apps that could be effectively demoed. Going forward, I plan to continue using this technique to evaluate app quality in future milestones. Having seen its success, I’m interested to hear what people think about subjective quality analysis approaches they have used to assess quality. What subjective quality approaches have you used? Was the approach successful? Why was it successful? What value did the subjective analysis provide on your project?

Technology Reflection: Sikuli in Testing the Open Web Apps Infrastructure

One of my first tasks upon starting work at Mozilla was to enhance an existing prototype framework to test the Open Web Apps infrastructure. This infrastructure was built with a tool called Sikuli, which uses image recognition to find particular areas of a user interface and perform a certain set of actions, such as clicking a button that looks like a given image. The infrastructure was originally designed to work with Mac OS X, but needed to evolve to support other operating systems such as Windows 7. I began making changes to the infrastructure, such as allowing the code base to use imagery specific to Windows 7 and fixing quirks that caused platform-specific issues on Windows. Once I had an initial working solution, it was to be enhanced by the Mozilla community at a test day, a day where the community and full-time employees work together directly to solve a particular set of current quality assurance problems. However, the test day showed that the infrastructure was not robust: it ran into issues such as not being able to run under different Windows 7 themes and behaving inconsistently at different screen resolutions.

In reflecting on this experience, I now question whether image recognition is an effective mechanism to rely upon to perform tests across a variety of machines running different operating systems and configurations. The rationale is that the testing framework developer has to deal with the overhead of handling all possible customizations of each operating system if he/she expects the framework to run on any possible machine, which is a requirement of the testing infrastructure. For example, what happens if the desktop icons on my machine are large, but on someone else’s machine the icons are small? The infrastructure then needs to be able to resize imagery based on the machine’s specific settings, which is significant overhead to implement. In Sikuli specifically, our team noticed it mainly matches images against a similarity percentage, but we did not come across a way to handle the machine-specific issues we needed to deal with. As a result, Sikuli in this situation does not offer the reliability necessary to accurately capture when functionality is and isn’t working consistently across many test runs.

Note that I do think Sikuli in itself makes the development of user interface automation quite simple. Typical requirements for our test cases usually just involved taking screenshots of different portions of the user interface and telling Sikuli to find them, decide if they exist, and click on them. Code requirements were as simple as loading an image and sending it to a specific Sikuli function to perform the required action (e.g. click), as in the sketch below. The tool itself therefore benefits from simplicity, making the barrier to entry to learn and build working scripts specific to your machine quite low. This simplicity is especially important to a team and a community building the code base together, as it reduces the time overhead required to make an effective contribution.
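
For context, here is a rough sketch of the style of Sikuli (Jython) script described above. The image file names, similarity threshold, and timeouts are placeholders; the screenshots would have to match the exact theme, DPI, and resolution of the machine running the test, which is precisely the fragility discussed earlier.

    # Rough Sikuli (Jython) sketch; image names and thresholds are placeholders.
    # Inside the Sikuli IDE these functions are globals; an imported module
    # needs the explicit import below.
    from sikuli import *

    # Screenshots captured on one specific Windows 7 theme and resolution; a
    # different theme or DPI can break the image match.
    INSTALL_BUTTON = Pattern("install_button_win7.png").similar(0.85)
    SUCCESS_DIALOG = Pattern("install_success_win7.png").similar(0.85)

    def install_app():
        # Wait up to 10 seconds for the install button, then click it.
        if not exists(INSTALL_BUTTON, 10):
            raise Exception("Install button not found on screen")
        click(INSTALL_BUTTON)

        # The success dialog should appear within 30 seconds.
        if not exists(SUCCESS_DIALOG, 30):
            raise Exception("Install did not complete (no success dialog)")

    install_app()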

Knowing now that our testing infrastructure requires both low barrier to entry and reliability across various platforms, our team is re-thinking our approach to building our test infrastructure. Some questions as a result that need to be answered are:

  • Are there other tools that could better fit our needs?
  • Do we need to consider building out specialized tooling to support simplicity and reliability?
  • Are there other considerations we know now that we also need to pay attention to in designing the test architecture?

I welcome any thoughts on what people think about designing a test infrastructure for simplicity and reliability. What do you think makes a software development tool have a low barrier to entry? What allows a developer to confirm that a software development tool has reliable behavior in the context of his/her project?

Work Culture Differences: Xerox, Mozilla, and Distributed Teams

After working at Mozilla for two weeks, I am really starting to see some distinct differences between the places I’ve worked. In today’s post, I’ll compare my internship experience at Xerox to my first two weeks at Mozilla and analyze Mozilla’s distributed team environment.

At Xerox, I was initially exposed to a cohesive team environment with a small team focus (typically ~7 people per team), casual attire for engineers, and a forty-hour work week. Above the small first-line teams, the company executives definitely prioritized business objectives over engineering objectives. On one side, I definitely liked the feeling of a small team, as I could easily connect with other people and their concerns and figure out how to have the team work as a unit. However, I thought the execution of the company’s business focus had its negatives. For instance, people would sometimes go into meetings and say what managers wanted to hear during feedback sessions, rather than their true opinion. A particular example was a lunch I attended with a group of interns and one of the Xerox executives. The executive asked a bunch of feedback questions, to which interns responded with the business-like answers the executive wanted to hear, rather than the truth. As a result, I felt uncomfortable; I did not feel I could express my true opinions in an open discussion with an executive, so I kept my mouth shut. This pattern showed up in other areas as well, such as meeting the CEO of the company and hearing an executive make business-like, silver-bullet arguments for agile across all of their software teams. After this internship experience, I came away with a real value for the “small team” feeling, but a complete disgust for some of the business-like arguments and discussions made by the company executives.

Comparing this experience to my first two weeks at Mozilla, I noticed that the company is definitely the opposite of a corporate, business-like culture. For instance, one public meeting with the QA team showed that people were very open in bringing up what they liked, and even what they didn’t like, right in front of a set of managers. The managers, I thought, were very open to promoting new ideas and trying them out to solve past problems during this meeting. As a result, I felt that people in the meeting were able to express their thoughts freely, even when they brought up critiques of how processes are currently established. Thankfully, I also noticed that Mozilla does seem to value the “small team” focus, as my main QA team, Mozilla Open Web Apps, is only four people. However, one transition I think I will need to work through in Mozilla’s team culture is the fact that its teams are highly distributed across the world.

A question I then need to answer is the following: how does a small team maintain cohesiveness in a distributed environment? I know Mozilla has IRC heavily established across its teams, but I think text messaging just does not have the same feeling as face-to-face communication. That style of messaging risks a communication barrier in which the parties on each end cannot capture some of the non-verbal expressions shown by each team member. A mitigation Mozilla appears to have established is video chat for meetings or discussions that have a high need for verbal communication. In my opinion, this does help each person in a communication channel capture some non-verbal cues. However, what about the cases when video chat isn’t being used, such as the random discussions that occur in an office? How can a person in a distributed environment naturally jump into the conversation to provide their opinion? How does the distributed team member then feel closely connected to their team members? An idea suggested by a team member of mine was to have a dedicated time for our QA team to work together, which I think is a step in the right direction toward mitigating this issue. I wonder if the theme behind this idea, also known as “team collaboration sessions,” could be used more frequently to address the challenges of cohesiveness in a distributed team environment. I could see this helping the team act as a unit, rather than a set of individuals blocked by the distributed communication barrier. However, for it to be effective, I think it has to be both frequent and consistent, even though there are outside challenges to deal with, such as finding a time when everyone can get together.

I’m interested in hearing responses to what people think about what was discussed above or your own opinions on the matter. What are your thoughts on cultural differences across different companies? How does a team act effectively and cohesively in a distributed team environment?
