Using Firebug in Internet Explorer
This tip comes from Ben Kelly. We all need to test on Internet Explorer, but we've been spoiled with tools like Firebug in other browsers. Ben has a tip to get Firebug to work in Internet Explorer:
In IE, go to your Favorites bar, copy-paste the following chunk of code as a new favorite, and call it "Firebug". (Jonathan: In IE 7, I created a bookmark, right-clicked it to open its properties, and then pasted the following code into the URL field.)

javascript:(function(F,i,r,e,b,u,g,L,I,T,E){if(F.getElementById(b))return;E=F[i+'NS']&&F.documentElement.namespaceURI;E=E?F[i+'NS'](E,'script'):F[i]('script');E[r]('id',b);E[r]('src',I+g+T);E[r](b,u);(F[e]('head')[0]||F[e]('body')[0]).appendChild(E);E=new%20Image;E[r]('src',I+L);})(document,'createElement','setAttribute','getElementsByTagName','FirebugLite','4','firebug-lite.js','releases/lite/latest/skin/xp/sprite.png','https://getfirebug.com/','#startOpened');


Then on any website, open that bookmark and it will give you a Firebug console.

It seems to work pretty well for CSS and DOM work, but it has some limitations when it comes to script debugging.

Enjoy.
Test ideas that come from test automation
I know it's not fashionable to like GUI-level test automation any longer. But whatever, I still like it. I'm unfashionable in more ways than one. I still like GUI-level automation for reproducing bugs, for automated acceptance testing, and for supporting my performance and exploratory testing. I also like non-GUI tests, but I've never disliked GUI automation.

One reason I still write GUI-level test automation is that it helps me learn about the product. I'm still amazed at how many times I say "Wow, really?" when I'm writing my tests. Because I'm always picking at the GUI with tools like Firebug and Web Developer while I'm testing, I'm seeing things I don't normally see when I'm just clicking around.

For example, today I noticed:

  • One of the applications I'm testing doesn't remove fields from the screen when they aren't active; it just hides them. I had never noticed until my code counted on a field not being there (see the sketch after this list).

  • One of the applications I'm testing sometimes shows a parent-child relationship using icons, and sometimes doesn't. I had never noticed until I coded a rule expecting the icons to always be there.

  • One of the applications I'm testing appears to have a relationship between fields that I wouldn't have expected. I discovered this based on field naming conventions.
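
To make that first surprise concrete, here's roughly the kind of check involved, sketched in Python with Selenium WebDriver. This is a minimal sketch, not code from my actual suite: the page URL and the field id are invented for illustration.

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com/order")  # hypothetical page under test

try:
    field = driver.find_element(By.ID, "discount-code")  # hypothetical field id
    # The field can still be in the DOM even when it's "inactive";
    # is_displayed() is what distinguishes hidden from removed.
    if not field.is_displayed():
        print("field is in the DOM but hidden, not removed")
except NoSuchElementException:
    print("field was removed from the DOM entirely")
finally:
    driver.quit()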


All of these give me new test ideas completely unrelated to my automated tests. Anytime I'm surprised, I use that as an indicator that I have more tests to run. When I automate at the GUI-level, I often get surprised. I rarely get surprised when I'm automating against an API.
Gathering data and stats for performance testing
Sometimes when performance testing you need to do some research into other websites so you can make predictions about how people will use yours. This can be time consuming. You need to think of sites that will have similar users with similar usage patterns. They need to be roughly the same size and scope for the data to be transferable. It's not easy.

Once you've thought of some sites, you then need to get stats. There are a lot of ways and places to get those stats. Some of them reputable, some of them not. One place I turn to for some basic stats is Alexa. They seem to have good data, it's a very easy site to use, and I always find out a bit more than I was looking for.
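
Once you've pulled some rough numbers from a source like that, turning them into a starting workload model is mostly arithmetic. A minimal sketch, with every figure made up for illustration (replace them with whatever stats you actually dig up):

# Back-of-the-envelope workload math; every number here is an assumption,
# not real Alexa data.
daily_page_views = 250_000      # assumed daily page views for a comparable site
requests_per_page = 15          # assumed HTTP requests generated per page view
peak_factor = 3                 # assume peak traffic runs ~3x the daily average

avg_pages_per_sec = daily_page_views / 86_400
peak_requests_per_sec = avg_pages_per_sec * requests_per_page * peak_factor

print(f"average page views/sec: {avg_pages_per_sec:.2f}")
print(f"rough peak requests/sec to model: {peak_requests_per_sec:.0f}")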
Reviewing architecture documents
After a grueling round of reviewing some long architecture documents for a new project, I've noticed a few tools I rely on and habits I've developed for a first review:

  • Different colored pens and highlighters (where different colors indicate different things like follow-up, possible issue, quality criteria, testing feature, etc...)

  • Post-It tabs are a requirement for any document over 20 pages (where different colors again indicate different things to follow up on)

  • A fresh notebook for the project for notes, ad-hoc drawings, and questions

  • Google - to research abbreviations, terms, and technologies I don't know

  • A photocopier for diagrams that I want to have handy (but don't want to remove from the document without replacing with a copy)


Once I've completed an initial review, I'll try to follow up on some of the research tasks (learning new tools, technologies, etc...) right away while they're fresh in my head. Then I'll send emails full of questions off to various people on the project - sometimes sharing some of my sketches to make sure I understood things correctly.
Just hit refresh
I was taking a look at Google Product Search and wanted to think of the simplest test I could run that might reveal a lot of information. After the page loads, if you simply hit the refresh button in your browser repeatedly, you'll notice the following behaviors:

  • query response times change each time

  • some sponsors change each time, while others don't

  • column width (between product description and price) changes based on sponsor size


This gives me several ideas for testing and learning about the product. First, I feel like I could quickly program a script to track sponsor results and performance over time. If I varied the search criteria for similar products, this could quickly be used to start verifying the accuracy of the ads and the rules for displaying them. It could also become a good no-load baseline for the performance of whatever environment you're testing in.
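
A first cut at that script might look something like this sketch in Python (the search URL and the CSS selector for sponsor results are assumptions; you'd confirm both by inspecting the real page):

import time
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "http://www.google.com/products?q=digital+camera"  # hypothetical query

for run in range(20):
    start = time.time()
    response = requests.get(SEARCH_URL)
    elapsed = time.time() - start

    soup = BeautifulSoup(response.text, "html.parser")
    # The "sponsored" class is a guess; inspect the real markup (Firebug again)
    # to find the selector that actually wraps sponsor results.
    sponsors = [tag.get_text(strip=True) for tag in soup.select(".sponsored")]

    print(f"run {run}: {elapsed:.2f}s, sponsors: {sponsors}")
    time.sleep(5)  # be polite; don't hammer the site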

Understanding the relationship between sponsor text (number of characters) and column widths would also be worth looking into. Might be an issue, might not (likely not an issue). But it's something that can be verified quickly and repeatedly with the script that's already pulled together.