Blog
📷 Toy. Day 1 of the April 2024 Micro.blog Photoblogging Challenge.
Hypothes.is to WordPress
For quite some time, I have admired the way that Chris Aldrich has built his WordPress website with the aim of posting all of his writing and other content to his own site. One of its most interesting features is how he has incorporated his use of Hypothes.is, the free and open-source annotation tool, into the site. As someone who also uses Hypothes.is for casual and professional reading and within my teaching, I am trying to see if I can accomplish something similar.
David Shanske helped out by referring me to a GitHub Gist that registers custom post kinds outside of the Post Kinds plugin directory. This will allow me to retain the custom post kind even when Post Kinds is updated. I made a fork of David's Gist with some changes to define it as a kind related to annotation.
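For anyone curious, here is a rough sketch of what that registration might look like, assuming the register_post_kind() helper that the Post Kinds plugin provides. The argument names follow the pattern in David's Gist as I understand it, and my "annotation" labels and microformats property are illustrative, so check the Gist itself for the current details.

```php
<?php
// Sketch: register an "annotation" post kind from outside the Post Kinds
// plugin directory so the custom kind survives plugin updates.
// Field names and values here are illustrative; see David's Gist for specifics.
function my_register_annotation_kind() {
	// Do nothing if the Post Kinds plugin (and its helper) is not active.
	if ( ! function_exists( 'register_post_kind' ) ) {
		return;
	}
	register_post_kind(
		'annotation',
		array(
			'singular_name'   => 'Annotation',    // label for a single post of this kind
			'name'            => 'Annotations',   // plural label
			'verb'            => 'Annotated',     // verb displayed with the post
			'property'        => 'annotation-of', // microformats2 property (assumed)
			'format'          => '',              // optional core Post Format to map to
			'description'     => 'Highlights and annotations made with Hypothes.is',
			'description-url' => '',
			'show'            => true,            // show in the Post Kinds settings and metabox
		)
	);
}
// Run after Post Kinds has registered its built-in kinds.
add_action( 'init', 'my_register_annotation_kind', 11 );
```

Keeping a snippet like this in a small plugin of its own (or in a theme's functions.php) is what allows the custom kind to survive updates to Post Kinds.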
Icon Support
Using Chris’s instructions, I was able to include an SVG icon that displays on my posts and within the Post Kinds metabox editor. Be sure to select the “icon” or “icon and text” setting so that the SVG icon will display. I used the highlighter icon, with appropriate permissions, from Font Awesome’s GitHub collection.
Appearance
As a next step, I would like to customize the appearance of the kind. As a starting point, it might be good to try out the various Post Formats to see which works well. I am using the bookmark format at the moment.
Resources
- Chris Aldrich’s Annotation posts on his WordPress site.
- Related posts on using Hypothes.is, WordPress, IFTTT, and other services:
- Using IFTTT to syndicate (PESOS) content from social services to WordPress using Micropub | Chris Aldrich
- An Outline for Using Hypothesis for Owning your Annotations and Highlights | Chris Aldrich
- Manually adding a new post kind to the Post Kinds Plugin for WordPress | Chris Aldrich
- Hypothes.is annotations to WordPress via RSS | Chris Aldrich
Video(less) Games, Sonification, and Accessibility
Sonification is the use of non-speech sound in an intentional, systematic way to represent information (Walker & Nees, 2011).
A fascinating Twenty Thousand Hertz podcast episode, Video(less) Games, describes options for games composed mostly or entirely of sound. Gamers and developers discuss their motivations for contributing and the experience of play. At about 15:09, Steve Saylor, a blind video gamer and game accessibility consultant, describes how he developed a rich series of audio cues that can be enabled. These cues tell players about environmental features and action in the game. Listen to compare the experience with the audio layer on and off.
Games composed mostly or entirely of sound are not new: the Twenty Thousand Hertz episode describes Zork II, a text adventure game from the early 1980s that used a text-to-speech engine. But the idea of developing a convention for audio cues within a game, or even across multiple games, reminded me of the sonification of math equations I first saw in the Complex Images for All Learners accessibility guide from Portland Community College. The DIAGRAM Center has a wonderful article on sonification with audio examples that can be played back at different speeds. Sonification itself is not new, either. But the provision of multimodal data representations does not seem to be widespread in higher education, at least not that I have seen.
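To make the basic idea a little more concrete, here is a toy sketch in PHP (the language my WordPress site already runs on) that maps a short, made-up series of numbers to pitches and writes the result out as a WAV file. It is only an illustration of the value-to-pitch mapping; the DIAGRAM Center examples and tools built for real sonification work are far more sophisticated.

```php
<?php
// Toy sonification: map a made-up data series to tones and write a mono WAV file.
// Higher values become higher pitches.
$data       = array( 3, 7, 2, 9, 5, 8 ); // sample values (illustrative only)
$sampleRate = 44100;                      // samples per second
$toneLength = 0.3;                        // seconds per data point
$minFreq    = 220;                        // lowest pitch in Hz (A3)
$maxFreq    = 880;                        // highest pitch in Hz (A5)

$min     = min( $data );
$max     = max( $data );
$samples = '';

foreach ( $data as $value ) {
	// Linearly map the value from the data range onto the frequency range.
	$range = max( 1e-9, $max - $min ); // avoid division by zero for flat data
	$freq  = $minFreq + ( $maxFreq - $minFreq ) * ( ( $value - $min ) / $range );
	$count = (int) ( $sampleRate * $toneLength );
	for ( $i = 0; $i < $count; $i++ ) {
		$amplitude = 0.4 * sin( 2 * M_PI * $freq * $i / $sampleRate );
		$samples  .= pack( 'v', (int) round( $amplitude * 32767 ) ); // 16-bit PCM sample
	}
}

// Minimal RIFF/WAVE header for 16-bit mono PCM audio.
$dataSize = strlen( $samples );
$header   = 'RIFF' . pack( 'V', 36 + $dataSize ) . 'WAVE'
	. 'fmt ' . pack( 'V', 16 ) . pack( 'v', 1 ) . pack( 'v', 1 )
	. pack( 'V', $sampleRate ) . pack( 'V', $sampleRate * 2 )
	. pack( 'v', 2 ) . pack( 'v', 16 )
	. 'data' . pack( 'V', $dataSize );

file_put_contents( 'sonification.wav', $header . $samples );
```

Played back, the higher numbers simply sound higher; purpose-built tools layer on timing, stereo placement, and other dimensions, but the core move of translating values into sound is the same.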
Similar technologies are also being piloted in traditional sports such as tennis. The New York Times published a story by Amanda Morris describing a new technology called Action Audio that aims to make sports accessible to people who are blind or have low vision. Action Audio converts data, such as the data from the 10 to 12 cameras on an Australian Open tennis court, into 3-D sound in less than a second, allowing that audio to be broadcast alongside live radio commentary. You can hear an Action Audio sample of an Australian Open tennis match. To get the full benefit, use speakers or headphones with both left and right channels.
These innovations make me think about the materials that I create or make available to my students. What would educators need to know to become proficient in the use, evaluation, and creation of multimodal data representations? In the case of sonification, it might mean knowing where to find high-quality sonifications that have already been created. It might require training in how to produce and design sonifications. In terms of design, how can our existing base of research and theory help guide our decisions? These are fascinating questions that I would like to explore more thoroughly and bring back to the courses I teach.