As you may have seen here before, VA passed a bill saying universities can't display student email addresses without written consent from the student. We had a legacy site that used student emails as part of the title structure for posts. We had posts titled things like "This is email@example.com post" or "This is firstname.lastname@example.org". We want to remove a chunk so it's no longer an obvious email. The following function cleans up our two different email patterns so that the titles no longer include emails. It does this by tying a custom function to a WordPress filter. The first thing we do is get the post type; in this case we just want a custom post type we named profile. $type = get_post_type( get_the_ID() ); should get us that variable. Next, we define the things we'd like to find in the title. $find = array('@mymail.vcu.edu', '@vcu.edu'); lets us look for multiple items, and it could be many more than two. Our if statement only runs if $type equals 'profile'; otherwise it passes the $title variable right back unharmed. If it is a profile custom post type, it uses PHP's built-in str_replace function to look for the things we defined in $find and replaces them with our second variable ('' in […]
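Something like the following sketches the pattern described above. The function names are mine, not the production code; the post type 'profile' and the two domains come straight from the post.

```php
<?php
// Strip the email-domain chunks out of a title. The domains here are the
// two from the post; the function name is a placeholder.
function bt_strip_email_domains( $title ) {
    $find = array( '@mymail.vcu.edu', '@vcu.edu' );
    // str_replace accepts an array of needles; each match is replaced with ''.
    return str_replace( $find, '', $title );
}

// Inside WordPress, hook the cleaner to the_title and bail out early for
// anything that isn't the profile custom post type.
if ( function_exists( 'add_filter' ) ) {
    add_filter( 'the_title', function ( $title, $post_id ) {
        if ( 'profile' !== get_post_type( $post_id ) ) {
            return $title;
        }
        return bt_strip_email_domains( $title );
    }, 10, 2 );
}
```

A title like "This is firstname.lastname@vcu.edu post" comes back as "This is firstname.lastname post" — still recognizable to humans, no longer an obvious email.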
We needed to put a data privacy footer link on all our rampages sites. To do that I added this code to our generic network-activated plugin. Then we realized we'd need to skip that occasionally for particular sites, which is why we ended up adding a loop to skip sites by ID. It could be fancier and enqueue scripts etc. rather than just stapling them in, but the pattern's likely to be useful to others wandering in the darkness.
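A minimal sketch of that skip-by-ID pattern, assuming a multisite install — the site IDs and the link URL below are placeholders, not our production values.

```php
<?php
// Return true when this blog ID is in the skip list.
function bt_site_is_skipped( $blog_id, $skips ) {
    return in_array( (int) $blog_id, $skips, true );
}

// Inside WordPress: staple the footer link in everywhere except skipped sites.
if ( function_exists( 'add_action' ) ) {
    add_action( 'wp_footer', function () {
        $skips = array( 3, 17, 42 ); // example site IDs to leave alone
        if ( bt_site_is_skipped( get_current_blog_id(), $skips ) ) {
            return;
        }
        // "Stapled in" rather than properly enqueued, per the post.
        echo '<a href="https://example.com/privacy">Data privacy</a>';
    } );
}
```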
We had a list of rampages sites in a Google Spreadsheet and wanted to know when they were created. I started to look that up but only managed to do it twice before I gave up and went in search of another way. In this case it took two little bits of code. The first piece is active in our generic site-wide plugin. It adds the blog's creation date, last updated date, and post count to the base JSON data. That'll be handy in the future if we want to check up on sites with only one query rather than multiple queries. The second piece is a Google Script that makes a function I can call in the sheet by typing =getCreationDate("http://someurl.com/"). The two together answer my immediate problem, but the JSON modifications have some long-term value for us and might be useful to someone else.
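The PHP side might look roughly like this — rest_index is a real WordPress filter on the base JSON route, but the key names added here are my guesses at the post's shape, and the small extractor mirrors what the sheet-side getCreationDate() does with the fetched data.

```php
<?php
// Pull the registered date back out of a site's base JSON payload.
function bt_extract_creation_date( $json ) {
    $data = json_decode( $json, true );
    return isset( $data['registered'] ) ? $data['registered'] : null;
}

// Inside WordPress: add creation date, last updated, and post count to the
// index at /wp-json/ so one query answers all three questions.
if ( function_exists( 'add_filter' ) ) {
    add_filter( 'rest_index', function ( $response ) {
        $details = get_blog_details();
        $response->data['registered']   = $details->registered;
        $response->data['last_updated'] = $details->last_updated;
        $response->data['post_count']   = (int) wp_count_posts()->publish;
        return $response;
    } );
}
```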
If you tuned in about half an hour ago, you'd have seen how we're triggering channel creation in Slack based on a custom post type getting published. One of the other tricks we wanted to happen as a result was the creation of a Google Folder. There are a variety of ways to play this, but some of the easier ones would require options we have blocked on our VCU accounts. I could have gone around that via a personal account and subsequent sharing, but it seemed like it'd be more fun to do it this way. I knew I could trigger script events based on form submissions and that I could use the data in the form as variables. I also knew I could fill out form fields via URL parameters. What I didn't know was whether I could submit a Google Form without actually hitting submit. Turns out you can. Take your normal form URL. https://docs.google.com/a/vcu.edu/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/viewform?entry.1431785794 You can get one of the pre-filled URL patterns like so . . . which gives you a URL like this. You can see my pre-filled response 'fish tank' at the end of the URL. https://docs.google.com/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/viewform?usp=pp_url&entry.1431785794=fish+tank Now to make it auto-submit 'fish tank' you have to change one piece and add an element at the […]
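The usual shape of that trick is swapping viewform for formResponse and appending a submit parameter; a small helper (the function name is mine) makes the transformation concrete:

```php
<?php
// Turn a pre-filled Google Form URL into its self-submitting version:
// /viewform becomes /formResponse and a submit parameter is appended.
function bt_autosubmit_url( $prefilled_url ) {
    $url = str_replace( '/viewform', '/formResponse', $prefilled_url );
    return $url . '&submit=Submit';
}
```

Requesting the resulting URL server-side (file_get_contents() or wp_remote_get() both work) records the 'fish tank' response without anyone ever seeing the form.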
Image from page 249 of "The development of the chick; an introduction to embryology" (1919) flickr photo by Internet Archive Book Images shared with no copyright restriction (Flickr Commons) I ended up doing this while pursuing some of the API integration stuff for our projects page. It doesn't list the private channels but might be useful to someone. It was the byproduct of looking for a way to look up the ID for a particular channel, which ended up looking like this.
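A sketch of that channel-ID lookup — channels.list was Slack's public-channel listing method at the time; the token is a placeholder and the helper name is mine.

```php
<?php
// Find a channel's ID by name in a decoded channels.list response.
function bt_find_channel_id( $json, $name ) {
    $data = json_decode( $json, true );
    foreach ( $data['channels'] as $channel ) {
        if ( $channel['name'] === $name ) {
            return $channel['id'];
        }
    }
    return null;
}

// $json = file_get_contents( 'https://slack.com/api/channels.list?token=YOUR-TOKEN' );
// echo bt_find_channel_id( $json, 'general' );
```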
Image from page 211 of "Bulletin" (1961-1962) flickr photo by Internet Archive Book Images shared with no copyright restriction (Flickr Commons) I've been lucky enough to hire two awesome people who have started over the last month or so.1 We're also going to get a new supervisor on July 3rd. That's given me a bit of breathing room and a reason to start re-thinking some things. One of those things is how we document our work. Can we document what we do in a way that will create more people interested in doing these things? Can we do a much better job bringing active faculty to the forefront? Can we serve the end-of-year report needs regarding various data elements? Can we gather data we might reflect on regarding our own processes? How do we knit all this stuff together from various services without a lot of extra work?

The Old

I've done this more than a few times. The latest incarnation at VCU was the examples page (pictured above). It is semi-decent but was done in haste. It tries to affiliate tools and instructional concepts with examples. Conceptually, it's pretty close to TPACK in that way. It has done a marginal job thus far. It houses examples and people can browse them. It doesn't […]
Back when Instagram's API rules didn't completely suck, I wrote a few posts on scraping it so that some of our faculty could use those data in their research. Then all their rules changed and everything broke. That's their prerogative, but it's also my option to complain about it. Because I posted about it, I got a comment from raiym1 who let me know he wrote a PHP scraper that avoids the API limitations. I've now got that up and running and set up a simple GET so that the URL determines the tagged content that is returned. The PHP for that page is below and lets you replace the API URL in the old Google Scripts with a new URL like http://bionicteaching.com/creations/ig/scrape.php?tag=fish You can then make your own custom displays based on that. I made a quick custom page template for the artfulness WP theme (currently showing filler data from the exciting 'fish' tag). This example has the tag hardcoded in but could easily use a custom field to pass the value. 1 On this post. And apparently this theme doesn't support direct links to comments. About time I wrote my own theme . . .
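The rough shape of a scrape.php like that is below. The getMediasByTag() call is based on instagram-php-scraper's documented usage at the time and may differ by version — treat it as an assumption; the tag cleaner just keeps the URL parameter from sneaking anything odd in.

```php
<?php
// Strip everything but word characters from the incoming ?tag= value.
function bt_clean_tag( $raw ) {
    return preg_replace( '/[^\w]/', '', (string) $raw );
}

$tag = bt_clean_tag( isset( $_GET['tag'] ) ? $_GET['tag'] : 'fish' );

// Requires Composer's autoloader for instagram-php-scraper to be loaded first.
if ( class_exists( 'InstagramScraper\\Instagram' ) ) {
    $medias = InstagramScraper\Instagram::getMediasByTag( $tag, 20 );
    header( 'Content-Type: application/json' );
    echo json_encode( $medias );
}
```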
flickr photo shared by San Diego Air & Space Museum Archives with no copyright restriction (Flickr Commons) Minor Thoughts on Computational Thinking Probably obvious stuff, but I'm trying to jot things down for my own reference. The first thing one ought to know about computational thinking/programming is that there are many correct paths (although some are better1 than others). This is true for just about anything, but I think people expect technology to be much more . . . binary. Searching for cleaner paths can be kind of fun. Computational thinking is powered by vocabulary. Vocabulary, as in language, is closely tied to concepts (maybe analogies). Having never heard of the range function, it didn't occur to me that it existed . . . let alone that I should use it. To make it work properly I need grammar, but just knowing the word exists and means something starts to change things for me. It brings to mind setting up programming challenges much more like Dan Meyer's 3 Act math lessons . . . with the scenario really begging for the addition of a particular concept but letting students struggle with it rather than providing it ahead of time.

A Path

This is a little bit of real-life progression which demonstrates how one thing can be done in a variety […]
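As an illustration of "many correct paths" (my example, not the post's): summing 1 through 100 works as a loop, as range() plus array_sum(), or as a one-line formula — and until you know range() exists, the middle version isn't even thinkable.

```php
<?php
// Path one: accumulate with a plain loop.
function sum_loop( $n ) {
    $total = 0;
    for ( $i = 1; $i <= $n; $i++ ) {
        $total += $i;
    }
    return $total;
}

// Path two: range() builds the list, array_sum() collapses it.
function sum_range( $n ) {
    return array_sum( range( 1, $n ) );
}

// Path three: Gauss's closed-form formula.
function sum_formula( $n ) {
    return $n * ( $n + 1 ) / 2;
}
```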
flickr photo shared by goosmurf under a Creative Commons ( BY ) license This one will be improving considerably in the near future, but given I've just been talking to many interesting people about APIs, reclaiming various things, and Indie-Ed Tech1 I figured I'd get it out early and that'd force me to follow up. Nothing like ugly betas to drive development. It's also a chance to test my new blog-to-Twitter system as I disentangle myself from IFTTT. Nothing kills momentum like not doing something . . . This script currently works on public photos. You'll need a folder named imgs and a file named data.json. This thing should chew through all your photos and download the original-size image to the folder. It'll also build a giant JSON file with the image title, any lat/long coordinates, tags (not machine-entered though), and the photo date. I will warn you that I've only run it on 100 photos so far. I'll give the full thing a shot once I get things set up to put it on S3. 1 Count down to book . . .
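The request at the heart of a script like that looks something like this — flickr.people.getPublicPhotos and the extras values are real Flickr API pieces, while the key, user ID, and function name are placeholders of mine.

```php
<?php
// Build the Flickr REST URL for one page of a user's public photos,
// asking for the original-size URL, geo data, tags, and date taken.
function bt_flickr_photos_url( $api_key, $user_id, $page = 1 ) {
    $params = array(
        'method'         => 'flickr.people.getPublicPhotos',
        'api_key'        => $api_key,
        'user_id'        => $user_id,
        'extras'         => 'url_o,geo,tags,date_taken',
        'format'         => 'json',
        'nojsoncallback' => 1,
        'page'           => $page,
    );
    return 'https://api.flickr.com/services/rest/?' . http_build_query( $params );
}
```

Fetching each page, saving every url_o image into the imgs folder, and appending the title/geo/tags/date fields to data.json gets you the two outputs the post describes.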