After gathering and interpreting data, the next step in the HCD process is to use the information you have collected to develop a response or prototype.
While creating a response to a problem and evaluating it are often seen as separate processes (with evaluation often being an afterthought), these two actions are tightly intertwined in the HCD process. A prototype is a test, and thus the nature and shape of the prototype are largely defined by what you are trying to evaluate.
The title ‘Develop prototype and evaluate’ lends this step a formal quality, but it could equally be as small-scale as running an in-class activity in a slightly different way and watching to see whether the change has the impact you wanted. The crux of this step is actually trying a real thing and then being conscious of what happens when users engage with it.
Note that we use the term ‘response’ here as opposed to ‘solution’. Not all problems are solvable, or solvable within the remit you have, so it’s better to think of an action as a response rather than a solution. A response may ultimately turn out to be a solution, but there is no shame in it if it does not.
Design methodologies often dedicate a large part of the process to ideating and refining response ideas. In the learning and teaching space, if you have gained an awareness of the core issue, solution avenues will likely present themselves, so the better thing to do is simply make something (a prototype) and try it with users.
The responses you can consider will of course depend on a range of factors:
Evaluation can really be boiled down to two key questions: what is the positive outcome you are looking for, and what does (or would) this look like in concrete terms?
While it can be tempting to say that your goal is ‘better student engagement’, for the sake of a prototype evaluation this needs to be tied to something measurable within the experience or prototype you are building. Is ‘more engagement’ signified by more students completing weekly quizzes, or by more questions being answered in class discussions? If you are saying ‘more’, do you have data from a previous experience to compare against?
Often forcing ourselves to consider evaluation in this way draws attention to how much we can actually test or know about a user experience. In practical terms, an evaluation plan is just a written list of things you are hoping to see in response to an action you are going to take.
A few things to keep in mind:
Prototype design can also be summed up in a question: what can you do or create that users can interact with, and that will allow you to gain the most information for the least effort?
Basic prototypes (diagrams, pieces of paper standing in for digital systems, role play) can be used to verify that the key or dependent elements of something can work before you invest more energy designing other elements around it. This means you could use these methods either to trial elements of something you want to test before running the full prototype with students, or these elements could themselves be the prototype you run with students.
The form of the prototype you make is dependent on the particular aspect of the experience or object that you want to test. Consider the evaluation marker you are looking for and determine which parts are most relevant to include in relation to it. For example, if you are testing how learners might respond to interacting with one another through a shared document, and you are looking for signs of connection, the visual appearance of the document will have less of a bearing, so this is perhaps not a part of the prototype that needs to be refined.
A prototype balances accuracy of discovery against effort: the closer the prototype is to the final product, the more you will learn, but the more energy you have to invest to get that information. Where the balance sits depends on your needs.
How can you scale it?
User evaluation is the act of getting your users to interact with something you have created and being conscious of their response.
The most basic form of this is simply paying attention when people use your prototype. What do the users say and do? Can you see markers that match your defined points of evaluation? Writing a few comments on what you noticed can help you interpret the results and decide on the refinements you might make for the next iteration.
A more in-depth method is to gather opinions from users more formally. As with the information gathering stage above, you can use similar methods (observation, asking questions directly or via surveys), and you will need to manage bias (as discussed in the Creating user conscious questions resource).