
JsTestDriver


Introduction

JsTestDriver aims to help JavaScript developers use good TDD practices and to make writing unit tests as easy as it already is today for Java with JUnit.

Laying out Your Project

We have created a sample Hello-World project for you here: http://code.google.com/p/js-test-driver/source/browse/#svn/samples/hello-world

There are three things you need to set up your project with JsTestDriver:

  1. source folder: Although not strictly required, we recommend that you create a source folder for your application. In our case we have named it src.
  2. test folder: Again not strictly necessary, but keeping your test code away from your production code is a good practice. In our case we have named it src-test.
  3. configuration file: By default the JsTestDriver runner looks for a configuration file named jsTestDriver.conf in the current directory. We recommend that you name it the same way, or you will have to pass it as a command line option every time.

Writing your production code

You write your JavaScript production code as you normally would. We have created a sample Greeter class for demonstration purposes.

myapp = {};

myapp.Greeter = function() { };

myapp.Greeter.prototype.greet = function(name) {
  return "Hello " + name + "!";
};

Writing your test code

We have tried to follow the JUnit testing conventions as much as possible. If you are familiar with JUnit, then you should be right at home with JsTestDriver.

GreeterTest = TestCase("GreeterTest");

GreeterTest.prototype.testGreet = function() {
  var greeter = new myapp.Greeter();
  assertEquals("Hello World!", greeter.greet("World"));
};

JsTestDriver needs to know about all of your TestCase classes, so you declare a new test class using the notation below. Once the TestCase is declared, it acts as a normal class declaration in JavaScript. You can use normal prototype declarations to create test methods. See TestCase for more information.

GreeterTest = TestCase("GreeterTest");

Test methods are declared on the prototype. Optionally you can declare setUp() and tearDown() methods, just as in JUnit; a sketch follows the example below.

GreeterTest.prototype.testGreet = function() {
  var greeter = new myapp.Greeter();
  assertEquals("Hello World!", greeter.greet("World"));
};
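
For instance, a minimal sketch of how setUp() and tearDown() can be used with the Greeter above (the test method name here is just for illustration):

GreeterTest.prototype.setUp = function() {
  // runs before every test method
  this.greeter = new myapp.Greeter();
};

GreeterTest.prototype.tearDown = function() {
  // runs after every test method; release the fixture
  this.greeter = null;
};

GreeterTest.prototype.testGreetUsesFixture = function() {
  assertEquals("Hello World!", this.greeter.greet("World"));
};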

Writing the configuration file

If you are familiar with Java, you can think of the configuration file as a classpath for JavaScript. The file contains information about which JavaScript source files need to be loaded into the browser and in which order. The configuration file is in YAML format.

server: http://localhost:9876

load:
  - src/*.js
  - src-test/*.js

The server directive tells the test runner where to look for the test server. If the directive is not specified you will need to enter the server information through a command line flag.

The load directive tells the test runner which JavaScript files to load into the browsers and in which order. The entries can contain "*" for globbing multiple files at once. In our case we are saying to load all of the files in the src folder followed by all of the files in the src-test folder. For more information see ConfigurationFile.

Starting the server & Capturing browsers

Before you can run any of your tests you need to start the test server and capture at least one slave browser. The server does not have to reside on the same machine as the test runner, and the browsers themselves can be on different machines as well.

Starting server on port 9876:

java -jar JsTestDriver.jar --port 9876

Then capture a browser by going to the URL of your JsTestDriver server; if it is running on localhost, it should be:

http://localhost:9876

Click the link Capture This Browser. Your browser is now captured and used by the JsTestDriver server.

You can also directly capture the browser by using the URL:

http://localhost:9876/capture

You can also tell the server to automatically capture browsers by providing paths to the browser executables on the command line. Multiple browsers can be specified, separated by a comma ','.

java -jar JsTestDriver.jar --port 9876 --browser firefoxpath,chromepath

For the full list of command line flags, refer to CommandLineFlags.

Running the tests

Now that we have the server running and at least one browser captured, we can start running tests. Tests can be executed from the command line using:

java -jar JsTestDriver.jar --tests all

As long as the jsTestDriver.conf file is present in the current directory the test runner will read it and use it to locate the server and the files that need to be loaded into the browser. It will then load any files which have changed into the browser and run your tests, reporting the results on standard out.

Total 2 tests (Passed: 2; Fails: 0; Errors: 0) (0.00 ms)
  Safari 528.16: Run 1 tests (Passed: 1; Fails: 0; Errors 0) (0.00 ms)
  Firefox 1.9.0.10: Run 1 tests (Passed: 1; Fails: 0; Errors 0) (0.00 ms)

Automatically running tests in Eclipse

For a discussion of how to set up your tests to run automatically on save, see: http://misko.hevery.com/2009/05/07/configure-your-ide-to-run-your-tests-automatically/

 

GTAC 2009 - JsTestDriver


Google Tech Talk, October 22, 2009. Presented by Jeremie Lenfant-Engelmann, Google, at the 4th Annual Google Test Automation Conference, October 21st-22nd, 2009, Zurich, CH.

ABSTRACT: The proliferation of JavaScript unit-testing frameworks in the JavaScript community shows that no one has yet found the magical combination of features to make JavaScript testing a no-brainer.

JavaScript testing


This post about JavaScript testing is part of the Setting up a full stack for web testing series.

Introduction

JavaScript tests are sufficiently special to deserve a post outside of unit and functional testing: you need a special set of tools to write them. That said, the usual split between unit and functional tests still applies to JavaScript, so what I wrote about those two topics is also relevant here.

In this post I cover two separate tools for JavaScript testing: JsTestDriver, which is great for JavaScript unit tests, and Selenium, which can be used for functional testing of a web application that uses JavaScript.

Selenium

Selenium is a software suite used to remotely control browsers. It consists of various sub-projects:

  • Selenium Core: This is the original Selenium. It allows you to create an HTML page with some browser commands. Those commands can then be run in the browser when you open the page.
  • Selenium IDE: A Firefox extension that can record your actions in the browser. This is a great way to learn the Selenium syntax as you can then export the recorded actions in several programming languages.
  • Selenium Remote Control (Selenium RC): A Java server that allows controlling browsers remotely. Can be used from any programming language using a simple HTTP API.
  • Selenium Grid: Basically the same as Selenium Remote Control but for running in parallel.

Personally I only use Selenium RC, so I won’t talk about the other parts. To get started, read the excellent Selenium documentation which explains everything a lot better than I could. If you want a quick start I recommend the Selenium RC chapter.

The basic test case in Selenium works like this:

  1. Open the page to test (perhaps by first logging in)
  2. Execute some action like a click on a link or a mouse event
  3. Wait for the action to execute (Ajax loads to finish, etc.)
  4. Assert that the result is as expected

These four steps are repeated for every test case – so a lot of pages will be opened. Opening all those pages is the reason why Selenium tests tend to be very slow.

As a test framework you can use whatever you already have – in your favorite programming language. The difference is just that in your tests you will talk to Selenium to get the current state.

Take the following Python code as an example:

from selenium import selenium  # Selenium RC Python client

def test_login():
    sel = selenium("localhost", 4444, "*firefox", "http://localhost:5000/")
    sel.start()
    sel.open("/login")
    assert sel.is_text_present("Login")
    assert sel.is_element_present("username")
    assert sel.is_element_present("password")
    assert sel.get_value("username") == 'Username'

This script launches a Firefox browser and opens a login page of an application running on the localhost. It then gets several values from Selenium and asserts the correctness of these values using the standard test framework methods.

JsTestDriver

JsTestDriver is a relatively new tool which can be used to submit test suites to browsers. The browsers register with a Java-based server, and you execute tests by submitting them to that same server.

So far that sounds very similar to Selenium. The difference is that JsTestDriver works with a blank page in which it directly inserts the test suite. It does that with a lot of optimizations to make sure the test runs are as fast as possible.

After that the unit test suite is run – directly inside the browser – and the client gets the test results including stack traces if available.

I recommend the Getting started documentation on the JsTestDriver wiki to see some code.

One of the main differences to Selenium is that you write the tests directly in JavaScript. There is a built-in test framework but you can also use other frameworks depending on your taste.

To show some code that you can contrast with the Selenium example above, consider this example:

GreeterTest = TestCase("GreeterTest");

GreeterTest.prototype.testGreet = function() {
  var greeter = new myapp.Greeter();
  assertEquals("Hello World!", greeter.greet("World"));
};

Differences

Selenium is a very magic piece of software. Everybody falls in love with it when seeing their browser doing all the work. It’s something people just can’t resist. And so they end up using Selenium for all their web testing needs. I’ve been there myself.

The downside of Selenium is that it’s very brittle and slow. This is something that can’t really be avoided because all it does is control a browser. Opening pages and waiting for all scripts to load takes some time. And doing that hundreds or thousands of times, as is easily the case in a large test suite, leads to very slow test executions.

So instead of falling into that trap and only using Selenium, I recommend clearly separating out the unit tests, which you can then execute in JsTestDriver. JsTestDriver does a lot less work and because of that is a lot more stable and faster. Then do integration tests with Selenium to test some basic workflows.

As an example take an autocompletion widget which is used on a search home page. Almost everything the widget does you can test by just calling its public interfaces and seeing if it does the right thing. This includes all the strange edge cases such as handling network timeouts or invalid data. So this part you do with a big JsTestDriver test suite. Then you only need one small functional test case to make sure the widget is correctly embedded in your home page. That’s your Selenium test.
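
To make that concrete, here is a rough sketch of what such a JsTestDriver unit test could look like; the myapp.Autocomplete widget and its parseResponse method are hypothetical names used only for illustration:

AutocompleteTest = TestCase("AutocompleteTest");

AutocompleteTest.prototype.setUp = function() {
  // hypothetical widget under test
  this.widget = new myapp.Autocomplete();
};

AutocompleteTest.prototype.testInvalidDataYieldsNoSuggestions = function() {
  // exercise an edge case purely through the public interface
  var suggestions = this.widget.parseResponse("not valid JSON");
  assertEquals(0, suggestions.length);
};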

As is evident I’m very happy that JsTestDriver has come along. Before that the only good solution for JavaScript testing was Selenium – and as I explained above it’s not a perfect fit for every testing need.

Conclusion

If you have followed my testing tutorial so far you now have all your testing tools set up. Some more chapters will follow but those now cover testing philosophy and tools around the principal testing framework.

Jasmine Adapter for JsTestDriver


Author

Requirements

Usage

Create, or update, a jstestdriver.conf file (see wiki page for more info).

Update your jstestdriver.conf by prepending the jasmine library and the adapter's source files.

For example:

load:
- "../jasmine/lib/jasmine-1.0.1.js"
- "../JasmineAdapter/src/*"
- "your_source_files.js"
- "your_test_files.js"

Copy server.sh and test.sh (included) to your working directory, for convenience.

# copy
cp /path/to/jasmine-jstestdriver-adapter/*.sh ./

First, run server.sh and supply -p for the port and -j for the path to jstestdriver.jar, or follow the convention defined in the .sh scripts (see Caveats below).

Open up http://localhost:9876/capture (update for your port) in any browser.

Finally, run test.sh to run all tests (specs) included in the jstestdriver.conf. Optionally pass -j and -t arguments to test.sh to set the path to jstestdriver.jar and a specific test you'd like to run, respectively.

Directory Layout

  • src: The adapter source code. The intent is to match interface with interface.
  • src-test: The test files that verify that the adapter works as intended.

Caveats

jsTestDriver.conf and *.sh files

The files in this repo assume that the parent folder contains the jasmine source and a compiled jstestdriver.

Update the paths, or pass arguments (as explained above), to reflect your own layout if you'd like to test the adapter.

JSTD 1.3.2

This release has a known bug (232) with relative paths. A quick solution is to place jasmine.js and JasmineAdapter.js inside the absolute path. In other words, make sure you do not use ... Other options are to use a 1.3.1 jar or to compile a jar from the HEAD (trunk) of the JSTD repository.

Changes

  • 1.1 - 2011-04-06 Olmo refactors and cleans up the code into a more encapsulated adapter.
  • 1.0 - 2010-12-14 Misko completely rewrites the adapter, which is now a passthru for JSTD. Adds ddescribe and iit.
  • 0.5 - 2010-10-03 Christoph has been improving the code and fixing bugs. Adds .sh files for simple runs of the server and client.
  • 0.2 - 2010-04-22 Misko fixes and refactors the adapter: beforeEach, afterEach, and nesting supported.
  • 0.1 - 2009-12-10 Olmo Initial release. Some support for beforeEach, afterEach, and matchers.

Jasmine: BDD for your JavaScript


Jasmine is a behavior-driven development framework for testing your JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests.

describe("Jasmine", function() {
  it("makes testing JavaScript awesome!", function() {
    expect(yourCode).toBeLotsBetter();
  });
});

Jasmine can be run anywhere you can execute JavaScript: a static web page, your continuous integration environment, or server-side environments like Node.js.

Find out more in the documentation.


Introduction to JavaScript BDD testing with Jasmine library | Web side of life

Posted: July 22nd, 2011 | Filed under: jasmine, javascript, testing

If we look at developments in the software testing area, the next evolutionary step after TDD is so-called BDD (Behavior Driven Development). It is hard to describe BDD in a couple of sentences. An excerpt from Wikipedia says:

BDD focuses on obtaining a clear understanding of desired software behaviour through discussion with stakeholders. It extends TDD by writing test cases in a natural language that non-programmers can read. Behavior-driven developers use their native language to describe the purpose and benefit of their code. This allows the developers to focus on why the code should be created, rather than the technical details, and minimizes translation between the technical language in which the code is written and the domain language spoken by the business, users, stakeholders, project management, etc.

If this looks a bit complicated, in one sentence it means: BDD helps programmers and non-technical stakeholders understand each other better and develop software that is the most representative model of the problem the software is meant to solve.

Jasmine is a library for BDD testing of your JavaScript. It offers a range of possibilities for describing software functionality with simple and descriptive tests, which is one of the basic principles of BDD-based development. This way, tests usually serve as the most accurate software documentation. More about the Jasmine library can be found on its GitHub repository.

After you download the standalone Jasmine distribution from here, you'll find in the archive everything you need to write your first tests, together with an example of how to organize your code. The most important file is SpecRunner.html which, when opened in a browser, will execute the tests and give you a report on their state. SpecRunner.html is very simple and self-explanatory. The lib folder contains the Jasmine library, the src folder contains the code to be tested, and the spec folder contains the actual software specifications as tests.

Next, I will implement a simple object that can register multiple callback functions and call them with given data: something like a simple observer. I will always first write the tests and then implement the functionality to make them pass.

We start from the downloaded archive of the standalone Jasmine library. We need to delete the example code provided with the library to implement our own example.

cd jasmine-standalone # folder in which we extracted downloaded archive
rm src/*
rm spec/*

Also, from the SpecRunner.html file we delete the references to the scripts we just removed, so the part which includes them stays empty, like this:

...

  <!-- include source files here... -->

  <!-- include spec files here... -->

...

In the TDD and BDD philosophy, one rule is essential for success.

First you write the test, which initially fails; then you implement the code which should satisfy the specification described by the test. You execute the test and continue with the implementation until the test passes. When that happens, you know the implementation is OK.

Accordingly, we create the file spec/ObserverSpec.js to hold our tests, and we define a first test that simply checks for the existence of the observer object.

describe("rs.ji.observer", function () {

    it("should be present in the global namespace", function () {
        expect(rs.ji.observer).toBeDefined();
    });

});

And we add this specification to SpecRunner.html to execute it:

<!-- include spec files here... -->
<script type="text/javascript" src="spec/ObserverSpec.js"></script>

If we now open SpecRunner.html in a browser (e.g. Google Chrome), we will see one failing test.

It is important to take a look at how our test reads. If you read the test, you'll see how descriptive it is and how, in simple words, it expresses how our code should behave, even though the example is very simple. It takes some creativity from the developer to make tests really valuable, but more important than that is for the developer to have a deep understanding of the vocabulary and terminology of the domain for which the software is developed, because that understanding should be heavily embedded in the test definitions.

The next step is to implement the code which will satisfy the test. We create the file src/observer.js:

// defining so called namespace objects so I can namespace observer into
// rs.ji.observer
var rs = {};
rs.ji = {};

// observer implementation
rs.ji.observer = {};

And we will add it to SpecRunner.html:

<!-- include source files here... -->
<script type="text/javascript" src="src/observer.js"></script>

If we run SpecRunner.html again, we can see that our test passes; it is green :)

Now that we know the principle “First failing test, then implementation”, we can add new functionality to our simple example. We add the next test:

    it("should be able to register callback function", function () {
        // define function to be placed into the observer register
        function callback() { return null; }
        rs.ji.observer.register(callback);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(1);
        expect(callbacks[0]).toBe(callback);
    });

And the implementation that makes this test pass:

rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    }
};

We want to check whether we can register multiple callbacks. This functionality should already be supported, but we write a test to confirm it:

    // before every test we reset array of callbacks
    beforeEach(function () {
        rs.ji.observer.callbacks = [];
    });

    it("should be able to register multiple callbacks", function () {
        function callback1() { return null; }
        function callback2() { return null; }
        rs.ji.observer.register(callback1);
        rs.ji.observer.register(callback2);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(2);
        expect(callbacks[0]).toBe(callback1);
        expect(callbacks[1]).toBe(callback2);
    });

Programmers often use print, echo and friends to check code for certain functionality. That takes roughly as much time as writing a test, but on the other hand tests can be executed again and again after this initial check is done, even when we move on to implement something else.

If we run SpecRunner.html in browser, all tests should pass.

The last thing our observer should do to satisfy its basic purpose is to update the callback functions with some new information. That means it should be able to call them with some data. To define this functionality we write a new test:

    it("should provide update method to execute all callbacks with provided data", function () {
        window.callback1 = function () { return null; }
        window.callback2 = function () { return null; }
        spyOn(window, 'callback1');
        spyOn(window, 'callback2');
        rs.ji.observer.register(window.callback1);
        rs.ji.observer.register(window.callback2);
        rs.ji.observer.update("data string");
        expect(window.callback1).toHaveBeenCalledWith("data string");
        expect(window.callback2).toHaveBeenCalledWith("data string");
    });

Here we use a more advanced technique of spying on methods (spyOn) in order to see whether one method is called while another is executed.
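
As a small standalone illustration of what a spy records (the logger object here is a hypothetical collaborator invented for this example, and calls.length is the Jasmine 1.x way to inspect the recorded calls):

it("illustrates how a spy records calls", function () {
    // hypothetical collaborator, not part of the observer spec
    var logger = { log: function (msg) { return msg; } };
    spyOn(logger, 'log');                    // replaces logger.log with a recording spy
    logger.log("hello");
    expect(logger.log).toHaveBeenCalledWith("hello");
    expect(logger.log.calls.length).toBe(1); // exactly one recorded call
});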

As a helper, we will add the jQuery library at src/jquery.js and include it in the SpecRunner.html file:

<!-- include source files here... -->
<script type="text/javascript" src="src/jquery.js"></script>
<script type="text/javascript" src="src/observer.js"></script>

And we implement code to satisfy the test:

// observer implementation
rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    },

    // added update method to inform registered callbacks
    // uses jQuery to iterate over the array
    update: function(data) {
        $.each(this.callbacks, function (i, callback) {
            callback.call(null, data); // call every callback with provided data
        });
    }
};
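
jQuery is used here only to iterate over the callbacks array; if you would rather avoid the dependency, a plain loop works just as well. A sketch of an equivalent update method:

// equivalent update without jQuery
update: function (data) {
    for (var i = 0; i < this.callbacks.length; i++) {
        this.callbacks[i].call(null, data); // call every callback with provided data
    }
}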

That is it: we now have a full implementation of a simple observer, with tests that describe the functionality of the software pretty clearly and that can, at any time, prove the needed functionality is working by executing them in the browser (or in different browsers, to see if it works in all of them).

Complete files:

describe("rs.ji.observer", function () {

    beforeEach(function () {
        rs.ji.observer.callbacks = [];
    });

    it("should be present in the global namespace", function () {
        expect(rs.ji.observer).toBeDefined();
    });

    it("should be able to register callback function", function () {
        // define a function to be placed into the observer register
        function callback() { return null; }
        rs.ji.observer.register(callback);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(1);
        expect(callbacks[0]).toBe(callback); // the first registered callback should be our function
    });

    it("should be able to register multiple callbacks", function () {
        function callback1() { return null; }
        function callback2() { return null; }
        rs.ji.observer.register(callback1);
        rs.ji.observer.register(callback2);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(2);
        expect(callbacks[0]).toBe(callback1);
        expect(callbacks[1]).toBe(callback2);
    });

    it("should provide update method to execute all callbacks with provided data", function () {
        window.callback1 = function () { return null; }
        window.callback2 = function () { return null; }
        spyOn(window, 'callback1');
        spyOn(window, 'callback2');
        rs.ji.observer.register(window.callback1);
        rs.ji.observer.register(window.callback2);
        rs.ji.observer.update("data string");
        expect(window.callback1).toHaveBeenCalledWith("data string");
        expect(window.callback2).toHaveBeenCalledWith("data string");
    });
});
// defining so called namespace objects so I can namespace observer into
// rs.ji.observer
var rs = {};
rs.ji = {};

// observer implementation
rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    },
    update: function(data) {
        $.each(this.callbacks, function (i, callback) {
            callback.call(null, data);
        });
    }

};

An experiment in A/B Testing my Résumé


Objective

I’ll admit it: my résumé doesn’t stand out. I’ve had some great internships, but also a tendency to work for companies that aren’t (yet!) household names. And though I’m doing fine academically, it’s not well enough to stand out on my marks alone.

On the other hand, my blog lets me stand out. I’ve had a few opportunities to meet and interview with some great people and companies because they read my blog. Naturally, then, the primary goal of my résumé is to get people to visit my blog. Since I don’t quite have the audacity to make my résumé a note telling people to visit my blog, I’m faced with the problem of how to optimize my résumé to ensure people see my blog. That’s where this experiment comes in.

Methodology

I started thinking about variables in my résumé that could affect the rate at which people viewed my blog. I narrowed it down to three that I could easily test.

The first is the length of the résumé. My friends Rajesh and Shams are adamant about keeping their résumés down to a single page. Their arguments are sound, but I wanted to see if the data would back up their beliefs. I created a “short” version of my résumé which I squeezed into one page by omitting the Awards section and removing some skills.

Second, I wanted to know how my grades affected the résumé. Obviously I couldn’t start making things up, but since my major average differs from my overall average by a good margin, I had two numbers that I could use truthfully with a subtle change in wording.

Résumé variations with different grades


Finally, I wanted to test whether it pays to include social media links on the résumé. I chose GitHub, LinkedIn, and twitter as the links to test. GitHub was an obvious choice because it emphasizes my free-time projects. LinkedIn seemed like a good one to test, given that it is for professional networking. I chose twitter as another variation because I was curious to see what the reaction to a more personal social networking site would be. All résumés linked back to my blog as well. Lastly, I had another résumé which linked only to my blog, as a control group.

Résumé variation with a link to GitHub


In all, these three variations resulted in 16 unique résumés. Fortunately I didn’t have to create them all by hand. I was already using LaTeX for my résumé, using one of the elegantly typeset templates from The CV Inn as a base. I simply threw my latest résumé into a Mako template and wrote some Python code to spit out the 16 possible variations of the LaTeX code. Then I used pdflatex to create pdf files. Since I was putting the résumés online, I made a landing page. To keep things simple, the landing page was just an image version of the résumé with a link to download the pdf, and just enough CSS to look presentable.

I wanted to track three things: how many people downloaded the résumé, how many people scrolled to the bottom of the landing page, and how many people visited my blog. The first I accomplished by logging downloads. The second I accomplished with jQuery and an Ajax callback. The third I accomplished with a tracking image, just like hit counters in the 90s. I used IP address and cookies to match up actions with the associated résumé.
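
The tracking code itself isn't shown in the post; a minimal sketch of how the scroll-to-bottom ping might look with jQuery is below (the /track/scrolled endpoint and RESUME_ID are made-up placeholders):

$(window).scroll(function () {
    // the bottom is reached when scroll offset plus viewport height covers the document
    if ($(window).scrollTop() + $(window).height() >= $(document).height()) {
        $(window).unbind('scroll');                        // count only once per page view
        $.get('/track/scrolled', { resume: RESUME_ID });   // hypothetical tracking endpoint
    }
});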

The only remaining problem was how to get hundreds of people to see my résumé in a short period of time. Fortunately Google offered me $110 in AdWords credits as a Google Analytics user, so I took advantage of that and ran ads on Google searches. Here is one of the half-dozen variations I ran:

One of the Google ads I ran

Results

After less than a week, I managed to exhaust my AdWords budget and gather a fair bit of data. I wrote a few Hadoop jobs with my new toy and then brought the data into R for analysis and visualization.

Length

As you might expect, the people who encountered the short résumé were much more likely to scroll to the bottom. Just over half did, versus just under a third of those presented with the long résumé. This makes sense because there is less to scroll through, but it was nice to have the data confirm my suspicions. Note that in the following graph, and all others in this post, the grey lines indicate the 90% confidence interval.


The short résumé also resulted in more downloads and blog views, but not enough to be statistically significant with the amount of data I collected.

Grades

The grades shown on the résumé didn’t affect any of the metrics I was measuring in a statistically significant way.

Links

I was surprised to find that the non-blog link shown on my résumé affected the frequency of click-throughs to my blog. Even adding a link to my GitHub profile more than halved the frequency of a clickthrough to my blog. LinkedIn and twitter were even worse.


I created a heatmap-like visualization from the relative significances of each link to each other. For example, the upper leftmost cell means that it is 97.2% likely that if a sufficiently large group of people were exposed to each of the LinkedIn and blog-link-only versions of my résumé, the group that saw the blog-link-only version would visit my blog more. Jesse E. Farmer has written more about the details of how this is calculated.
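
The post defers to Jesse E. Farmer's write-up for the details of the calculation; I don't know the exact method used here, but a common way to compare two such conversion rates is a two-proportion z-test. Under that assumption, the statistic would be

$$ z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}, \qquad \hat{p} = \frac{x_1 + x_2}{n_1 + n_2}, $$

where $x_i$ and $n_i$ are the conversions (blog visits) and visitors for each variation, and the quoted confidence corresponds to the one-sided normal probability $\Phi(z)$.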


Oddly, the effect was reversed when you consider downloads rather than blog views. The résumés without any social media links were far less likely to be downloaded than those with. Even a résumé with a twitter profile did better than one without, though not by enough to be statistically significant.



The additional links also reduced the frequency of readers scrolling to the bottom of the page.


Conclusion

There are two main things I learned from this experiment. First, I’m going to keep social network links off of my résumé. Although they increased the download rate, they decreased visits to my blog. Since the latter is my priority, I’m not going to start adding social networks to my résumé.

Second, the short résumé did better in every way. However, the improvement in blog views was not statistically significant. For now, I’m keeping my online résumé at two pages, but I will use the one-page version in print.

There are a number of disclaimers I should make here. For one, even if my findings are true of my résumé, they might not be true of other résumés. Maybe a change in layout would diminish the effect of linking to social media profiles, or make the longer résumé convert better. I should also point out that I have no way of knowing who my audience was. They probably weren’t all in a position to hire a programmer or data scientist, so the factors that make them visit my blog may or may not have the same effect on those who are.

And finally, a shameless plug. I’m looking for an interesting data science internship this fall (September to December 2010). If you’re doing cool things with data, I’d be glad to hear from you. My contact information is in the sidebar.

This entry was posted in Data Mining, Math, R, Statistics.

Tellurium / AOST

The Tellurium Automated Testing Framework (Tellurium) is a test framework built on top of the Selenium test framework that abstracts UI components into Java objects.