Google Script CV Workflow POC

flickr photo shared by The National Archives UK with no copyright restriction (Flickr Commons)

Suppose you wanted to automate a chunk of your CV creation. Suppose they'd let you do it digitally via Google Docs (if not, aspects of this could still work, but it wouldn't be nearly as interesting) and that you'd like to link to the "proof" files. I'm further supposing you might be willing to think about doing this slightly differently. Usually people build the CV/tenure document and then go back and find/link to their evidence. The path I'm suggesting would let you gather the evidence as you come across it and then build the index to it automatically. You'll still want to construct the overarching narrative, but this takes the grunt work of listing/linking and puts it on the computer where it belongs.1 This is the proof-of-concept scenario. You could make it much better depending on your needs/wants, but this ought to get you started on how it could work. The script creates a spreadsheet of all your content with a variety of useful links and a Google document with all the files as ordered list items under their respective folder headings.2 Given one folder called CV POC . . . in that folder are your three folders of […]
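The full script lives in the original post; as a rough sketch of the idea in Google Apps Script, assuming a top-level folder literally named "CV POC" and spreadsheet/doc names of my own choosing, it might look like this:

```javascript
// Sketch: walk the subfolders of a "CV POC" folder and build an index of the
// evidence files as (1) spreadsheet rows and (2) ordered list items under
// folder headings in a Google Doc. Names here are assumptions for illustration.
function buildCvIndex() {
  var root = DriveApp.getFoldersByName('CV POC').next(); // assumes the folder exists
  var sheet = SpreadsheetApp.create('CV POC index').getActiveSheet();
  sheet.appendRow(['Folder', 'File', 'Link']);
  var body = DocumentApp.create('CV POC evidence').getBody();

  var folders = root.getFolders();
  while (folders.hasNext()) {
    var folder = folders.next();
    body.appendParagraph(folder.getName())
        .setHeading(DocumentApp.ParagraphHeading.HEADING2);
    var files = folder.getFiles();
    while (files.hasNext()) {
      var file = files.next();
      sheet.appendRow([folder.getName(), file.getName(), file.getUrl()]);
      body.appendListItem(file.getName() + ' (' + file.getUrl() + ')')
          .setGlyphType(DocumentApp.GlyphType.NUMBER); // ordered list item
    }
  }
}
```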

Auto-Logging Email via Google Script

flickr photo shared by OSU Special Collections & Archives : Commons with no copyright restriction (Flickr Commons)

A while back I was logging emails in a Google Sheet via IFTTT. I'd add a hashtag and forward the email on, and a spreadsheet would parse out some pieces of the subject line based on the | character. At some point it stopped working and I never quite figured out what the issue was. I thought I wrote about it but, if I did, I can't find it. It may be in one of my many draft posts. In any case, here's a better and more customizable solution. It grabs anything I label 'support' and throws it into a spreadsheet with a few chunks of information in different columns (to, from, date, subject line, link to the email). You'd open a spreadsheet and name a sheet 'data.' Open the script editor (Tools>Script Editor) and put the script below in it. You might want to change the search parameters; look at the stuff below the asterisk line. If you want to play around with the right search parameters, just practice in Gmail and then use them in the query variable below. You will want to set the trigger to run at 1-2 AM each day. So now, all I have […]
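The script from the post is truncated in this excerpt, but a minimal sketch of the approach, assuming a sheet named 'data' and a Gmail label called 'support', runs along these lines:

```javascript
// Sketch: find recent threads labeled 'support' and log one row per message
// to the 'data' sheet (to, from, date, subject, link to the email).
// Label, sheet name, and the one-day window are assumptions for illustration.
function logSupportEmail() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('data');
  var query = 'label:support newer_than:1d'; // same syntax as the Gmail search box
  GmailApp.search(query).forEach(function (thread) {
    thread.getMessages().forEach(function (message) {
      sheet.appendRow([
        message.getTo(),
        message.getFrom(),
        message.getDate(),
        message.getSubject(),
        'https://mail.google.com/mail/u/0/#all/' + message.getId()
      ]);
    });
  });
}
```

Anything you can type into the Gmail search box (from:, has:attachment, newer_than:, and so on) should work in the query variable, which is what makes this more customizable than the IFTTT version.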

Grabbing JSON

flickr photo shared by Library Company of Philadelphia with no copyright restriction (Flickr Commons)

What I wanted to do was grab data from the WordPress API and use it to provision chunks of my new portfolio site. The portfolio is hosted on GitHub and GitHub is HTTPS. At the moment my bionicteaching site is not HTTPS.1 That causes problems, as secure and insecure are not friends. I wanted a quick and easy solution so I could continue until I do the HTTPS switch. The following is how I wandered toward a solution. A number of the things worked but don't quite do what I wanted, so they're worth remembering/documenting for later, and it's kind of fun to see a mix of JavaScript, PHP, URL manipulation, the Google API, and the WordPress V2 API all in one little bit of wandering. My first thought was to grab the JSON via a Google Script and store it in Google Drive. I can do that but can't seem to make it available for use the way I want. I tried messing with various URL parameters but it wasn't flowing, and I only started there because I thought it would be easy. I did eventually get the file accessible in Dropbox (the only other place I could think of immediately for HTTPS file storage) […]
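That Google Script first step is simple enough to sketch; the endpoint path and file name below are my own illustrative choices against the WordPress V2 API:

```javascript
// Sketch: fetch posts from the WordPress REST API (V2) and stash the raw JSON
// in Google Drive. Endpoint, query, and file name are assumptions.
function grabWordPressJson() {
  var url = 'http://bionicteaching.com/wp-json/wp/v2/posts?per_page=20';
  var json = UrlFetchApp.fetch(url).getContentText();
  DriveApp.createFile('portfolio-posts.json', json, 'application/json');
}
```

Saving the file is the easy part; getting it back out over HTTPS in a form a GitHub-hosted page can consume directly is where the wandering started.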

Scraping with Google Spreadsheets Across Instagram, Flickr, YouTube etc.

I remain kind of amazed at how many little tricks can be done with Google Sheets. After seeing Alan's post today, I wondered how much of the data I could pull (assuming we had the right user names and knew the services . . . really the harder part) just using Google Sheets. Turns out we could get a pretty good amount. The following is a mix of XPath, regex, and APIs. I started with as little real programming as possible and gradually increased sophistication. These are just meant to give a rough idea of how much stuff you've got in the various spaces.

Flickr
The URL: http://flickr.com/photos/bionicteaching
The function: =IMPORTXML(C2,"//*[@class='photo-count']")
This uses a basic Google Sheets function to grab the photo-count content. The function grabs the element whose class is photo-count.

Vimeo
The URL: http://vimeo.com/twwoodward
The function: =INDEX(IMPORTXML(C3,"//*[@class='stat_list_count']"),1)
Pretty similar to the example above but with the addition of INDEX. That solves the problem that there are multiple items in the stat_list_count class and we only want the first matching item.

SoundCloud
The URL: http://soundcloud.com/cogdog
The function: =REGEXEXTRACT(IMPORTXML(C4,"//*[@name='description']/@content"),"([0-9]+) Tracks")
This gets a bit fancier. IMPORTXML brings in a large chunk of content from the page, but it wasn't structured in a way that let me get the exact information I wanted. REGEX […]
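When the markup doesn't cooperate at all, the same idea can be pushed into a custom Apps Script function instead of a formula. This isn't from the post, just a sketch of the "more sophistication" end of the spectrum; the class name and regex are assumptions about the page markup and would need adjusting per service:

```javascript
// Sketch of a custom function: fetch a profile page and regex a count out of it.
// Used from a cell as =PROFILECOUNT("http://flickr.com/photos/bionicteaching").
// The photo-count class / regex below is an assumption and will vary by site.
function PROFILECOUNT(url) {
  var html = UrlFetchApp.fetch(url).getContentText();
  var match = html.match(/class="photo-count"[^>]*>\s*([\d,]+)/);
  return match ? match[1] : 'not found';
}
```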


A Bit More on the Personal API

Keep trying to growths “personal API” stuff, but it feels like a strained replacement for “organization”. https://t.co/A6i2HUF44c — Area Man (@xwordy) April 19, 2016

Between the tweet above and Alan's comment on the post (below), I figured I haven't really made clear why I'm doing this, or even what I'm doing. I'm probably both more middle-of-the-road and more ambitious than I've been able to articulate so far. I declare no holy war. This is more a journey of self-improvement, but I'm hoping the destination will be far more interesting than Chicken Soup for the Soul. I like the idea of establishing some sort of importance/urgency level to your list, but to me, it's a bit binary (reclaim or "let it burn"). I still maintain there's a fair bit of room in the middle ground. When Boone Gorges and D'Arcy Norman did their aggressive acts of Reclaiming a few years back, my thought was "That's impressive" as well as "That looks like a lot of work". See, I would rather take, edit, and share my photos than maintain my own Flickr wannabe in WordPress or whatever. And there is the potential social interaction you give up when you do a total reclaim, as happened when people went to Trovebox. I am content to store 44,000+ […]

Personal API: Progress in Pursuit of Nirvana

I'm going to give periodic updates on the personal API journey as a way to make myself accountable and document progress. As Kin Lane reminded me, this is a journey, and so I've decided there are strange parallels between my API/Reclaiming-my-content work and the path to enlightenment.1 Like a Buddhist with very low expectations, I seek an end to (platform-related) suffering and rebirths. I am attempting to extinguish the fires of:

- ignorance – I don't know exactly where all my stuff is, the rules governing it/me, or what I'm "paying" for the service.
- short-sightedness – I've put work/energy/content in places without enough/any thought about the future.
- acceptance – I've accepted sub-par experiences and oppressive EULAs.

There may be a fourth flame to extinguish around isolationism (not taking advantage of the connectedness of all things API) but I've probably butchered Buddhism enough for one post. Since our last installment I've migrated from Bluehost to Reclaim. People might claim that's just a move from one vendor to another. I disagree. Reclaim is both people I know and love and a company focused on the things I care about. Their goal is not entirely profit driven. I have no problem with people making money but I do have a problem with profit being the only driving force. It was a seamless move I put […]


First Steps in the Personal API

The first step in starting to consider your personal API is figuring out where your stuff is now. This has been an interesting experiment for me, as I've flung stuff all around the Internet with very little concern for long-term considerations. Where is my stuff? I'm trying to think about all the places I've put work and/or media I care about. I'm also trying to group all of it in some sort of organized fashion. I thought it'd make sense to think big picture and work my way down.

Domains/Servers
- bionicteaching.com – on Bluehost until I can do the Reclaim migration; mainly the blog but lots of random files as well; no real idea what's on here
- tomwoodward.us – on Bluehost until I can do the Reclaim migration
- rampages.us (work) – on Reclaim; code stuff is mostly on GitHub but content is in the wind
- augmenting.me (work) – on Media Temple; code stuff is mostly on GitHub, maybe
- greatvcubikerace.net (work) – limited; no idea if I've got this on GitHub
- teachers.henrico.k12.va.us (old work) – not sure it's salvageable in time (lost to the monsters?)

Google Docs
- bionicteaching – 5GB
- vcu (work) – 11GB
- montessori (work)
- henrico (work) – lost to the monsters; I document this as a reminder of how much stuff can be lost when you change jobs – remember changing ownership across google […]

Allowing Cross Origin Access to WordPress Feeds

flickr photo shared by The U.S. National Archives with no copyright restriction (Flickr Commons)

I believe this is safe but I'm no security expert. Everything I could find on XSS issues was focused on stealing passwords. WordPress feeds are all public and require no login, so I think it's all good. StackOverflow seems to agree. With that hearty and confidence-inspiring endorsement, I give you this amazingly complicated plugin to allow access to all your WordPress feeds from other stuff (like Kin's GitHub RSS reader).1 All simple stuff really; the key piece was getting the right hook, pre_get_posts, otherwise it was called too late. is_feed is the other little handy piece, which Tim Owens mentioned . . . and I subsequently used.

1 See how my site says success and Jim's says failed? It's only partially because he abandoned our country for Italy. It's also because he doesn't have this plugin turned on.
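The plugin itself is WordPress/PHP territory; for a sense of what it enables on the other end, here's a sketch of the consuming side, the kind of cross-origin request (like Kin's reader) that the browser blocks unless the feed responds with an Access-Control-Allow-Origin header. The feed URL is just an example:

```javascript
// Sketch of the consumer: a page on a different origin pulling a WordPress
// feed. Without Access-Control-Allow-Origin on the feed response, the browser
// refuses to hand the content over. Feed URL is illustrative.
fetch('http://bionicteaching.com/feed/')
  .then(function (response) { return response.text(); })
  .then(function (xml) {
    var doc = new DOMParser().parseFromString(xml, 'text/xml');
    var titles = Array.prototype.map.call(
      doc.querySelectorAll('item > title'),
      function (node) { return node.textContent; }
    );
    console.log(titles); // post titles from the feed
  })
  .catch(function (err) { console.log('Blocked or failed:', err); });
```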

Discography to WordPress

flickr photo shared by Thomas Hawk under a Creative Commons ( BY-NC ) license

This is in response to something Adam Croom wrote two(?) days ago. I thought it'd be an interesting proof of concept and would let me figure out some things with a purpose. I also like to have a few projects going at once so I have things to switch between when I get frustrated. I also see this kind of information pushing/pulling as broadly applicable. Some of this stuff is no doubt uglier than it had to be, but I'll try to show some intersections that happened to occur with other projects and how certain steps might be skipped entirely if you want to be all efficient and stuff. The final plugin is here and should be a decent start for any customized import you want to run against a CSV file. Adam had information in Discogs. He wanted that information in WordPress where he could control it. I had never heard of the site, let alone seen its API. But it was well documented, and it took me a few minutes to realize I could get all the data I needed without even needing to authenticate. The user data was associated with collections and appending 0 would get me the root-level stuff. With Adam's […]
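The plugin itself is PHP, but the data-grabbing half is easy to sketch in the Google Script vein of the other posts here. The username and the exact endpoint details are my assumptions about the Discogs API; folder 0 is the root-level "everything" collection the post mentions:

```javascript
// Sketch: pull a Discogs user's collection (folder 0) as JSON without auth.
// Username is a placeholder; Discogs expects an identifying User-Agent header.
function grabDiscogsCollection() {
  var url = 'https://api.discogs.com/users/SOME_USER/collection/folders/0/releases?per_page=100';
  var response = UrlFetchApp.fetch(url, {
    headers: { 'User-Agent': 'DiscographyPOC/0.1' }
  });
  var data = JSON.parse(response.getContentText());
  Logger.log(data.releases.length + ' releases in the first page');
  return data;
}
```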

Shifting out of IFTTT

Kin Lane mentioned that IFTTT, a service entirely built on APIs, doesn't have an API. That bothered Kin, and the more I thought about it, the more it bothered me. So I figured I'd start disentangling myself from IFTTT. One of the things I did with IFTTT was to send out a tweet any time I posted something new on my blog. Crazy to think I set that up in 2012. Granted, I could have replaced this with any number of plugins, but I thought this would be a fun bit of API work, and most interestingly it'd put me (mostly) in charge of how the tool worked. The following script is just cobbled together from something I found to get an RSS feed into a spreadsheet and a script I used a while back to send a tweet from a Google spreadsheet. Next steps will be to start playing with adding amusing variables to the message. The first message kicked through with a minor error but progress!

Grabbing Flickr Photos was blogged & can be found athttp://bionicteaching.com/grabbing-flickr-photos/ — Tom Woodward (@twoodwar) March 20, 2016
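The cobbled-together script isn't included in this excerpt; a stripped-down sketch of the RSS-checking half, with the feed URL, sheet name, and helper function as assumptions, looks something like this (the tweet-sending half needs Twitter OAuth credentials, which is where the second borrowed script comes in):

```javascript
// Sketch: read the blog's RSS feed, log items not already in the sheet, and
// hand new ones to a (not shown) tweet-sending function. Feed URL, sheet name,
// and the sendTweet helper are assumptions for illustration.
function checkFeedForNewPosts() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('posts');
  var seen = sheet.getRange('B:B').getValues().join('|'); // URLs already logged
  var xml = UrlFetchApp.fetch('http://bionicteaching.com/feed/').getContentText();
  var items = XmlService.parse(xml).getRootElement()
      .getChild('channel').getChildren('item');
  items.forEach(function (item) {
    var title = item.getChildText('title');
    var link = item.getChildText('link');
    if (seen.indexOf(link) === -1) {
      sheet.appendRow([title, link, new Date()]);
      // sendTweet(title + ' was blogged & can be found at ' + link); // hypothetical helper
    }
  });
}
```

Set a time-driven trigger on that function and the spreadsheet becomes the IFTTT replacement, with the message format entirely yours to play with.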