Community- Technically Speaking


Playing “Mah-Jong” at the Clubhouse of the Century Village Retirement Community. flickr photo by The U.S. National Archives shared with no copyright restriction (Flickr Commons)

Marie has a nice post summarizing the Georgetown Community presentation at Domains. And now Evelyn’s post reminded me to write a post on a site instead of just in my head.

The title of the presentation ‘Just a Community Organizer’ is a nod to the fact that community is hard to do. It can be hard technically but it’s often even more difficult on the human side.

As Evelyn brought up, community is not created by the technical ability to bring content together. There are lots of ways this can succeed technically but fail socially: the stuff is there but no one cares.

At the same time, technology failures can prevent community from forming even where you have all the other factors: people want to see what’s going on but can’t find and interact with the stuff they want in reasonable ways.

There’s also the idea that people might not know what they want to see (or how they want to see it) until it’s given as an option or scaffolded as an action. Can we present content in novel and interesting ways that inspire curiosity and interaction? That’s not easy to do.

There’ve been many attempts to build online community between individual sites1 throughout Internet history. Web-rings were an early way of associating groups of like-minded individuals to create community (and viewers), and various portals followed, but most(?) successful communities have occurred on single sites: places with much more standardized content on an entirely standardized platform (think MySpace, Tumblr, Facebook, Twitter, etc.). Even Geocities attempted to group things by topic. We’ve moved towards expectations of trending hashtags or ‘on this day 1 year ago’ reconnections, with associated ways to like/heart/repost/comment. That impacts what people expect from community sites and from content sites. These educationally focused community sites are both.

That moves our focus to the content. Are the things your students are making actually things people want to see? If your students hate to do it and your faculty hates to read it, it’d be crazy to think that other students would want to go comment on work they hate or don’t care about.2 Mike Wesch recently got 600,000 views on assignment six in the ANTH101 course. Not that 600,000 is the goal, but some audience is. Who cares about this work?

The complex thing about the content/practice/technical entwinement is that all three impact one another, and it will take a chunk of energy to get things flowing. It might take less energy to keep things flowing. In any case, it’s unlikely to just happen. It’s unlikely to happen without stewardship and effort, consideration and revision. There are no perpetual motion machines.3

Multiple Goals and Points of Friction

The thing about most community sites is they usually have multiple purposes. There is the desire to aggregate content to look at the program as a whole (often for accreditation purposes). There are promotional4 aspirations where showing top content might influence students choosing to enter the program, might influence how students in the program consider their own work, might influence how faculty consider the work in their own classes. There are additional considerations around communities of learners and how spaces like this might make, expand, and deepen the connections between them.

Often one site is made to do some of all that, but lacking focus it usually doesn’t do any of it very well.

The other piece that bogs stuff like this down is friction. You must figure out how people already do things and decide on the smallest changes possible, ideally changes that result in nearly immediate positive reinforcement. This plays into things like identifying content you want to highlight. Think about how you read content and how other faculty read content. Figure out smooth workflows; you may need a few. What is the absolute least work people can do, and how close to what they already do can you make the process?

Structured Data – Love All/Serve All

So now that I’ve rambled for way too long, let’s see how technology might serve you in a variety of ways. The main thing it does is either reduce/eliminate friction or allow for different views of the same information based on need.

The Site Browser

It’s usually easy enough to get a list of site URLs. If they’re running a reasonably recent vintage of WordPress, we can grab a bunch of info via the REST API. In the Google Apps Script to Google Sheets scenario, you’d run something like this.

function getData(url, row) {

  var json;
  var data;

  if (checkResponse(url)){
    var response = UrlFetchApp.fetch(url+'/wp-json/'); // get site info from the WP REST API root
    json = response.getContentText();
  }

  if (isJsonString(json)){
    data = JSON.parse(json); //parse data
  }

  var ss = SpreadsheetApp.getActiveSpreadsheet(); //get various spreadsheet things
  var sheet = ss.getActiveSheet();
  var blogInfo = [];

  if (checkName(data)){
    var theName = data.name; //get the site name
    var theDescrip = data.description; //get the description
    var dt = new Date(); //set date
    var utcDate = dt.toUTCString();

    blogInfo.push(theName); //push all this into an array for the row
    blogInfo.push(theDescrip);
    blogInfo.push(utcDate);
    blogInfo.push(dt);
  }else{
    blogInfo = ['error', '', '', '']; //still fill all four cells so setValues gets a full row
  }
  sheet.getRange('E'+ row + ':H' + row).setValues([blogInfo]); //set the data to the cells
}


//check dates for updates

function checkFresh(date){
  var now = new Date();
  var utcDate = now.toUTCString(); // was dt.toUTCString(), but dt isn't defined in this scope
  if (date && utcDate){
   return true;
  }
  return false; // be explicit instead of returning undefined
}


//error catchers
function isJsonString(str) {
    try {
        JSON.parse(str);
    } catch (e) {
        return false;
    }
    return true;
}

function checkResponse(url){
  try {
     UrlFetchApp.fetch(url+'/wp-json/');
  } catch (e) {
    return false;
  }
  return true;
}

function checkName(data){
  try {
     data.name;
  } catch (e) {
    return false;
  }
  return true;
}

function today (){
  var today = new Date(); // Date.now() returns a number, which has no toDateString()
  Logger.log(today.toDateString());
}

We can then set that stuff to update every X number of days and keep an up-to-date list of sites with additional information. We could also do other tricks here, like checking for the most recent post and flagging sites that only have the ‘Hello World’ post. That’d let you screen out sites that were just trials. You can also take a closer look at some of the stuff John set up here. Those check for abandoned sites. I’ve also got a few that look at the XML feeds of Dokuwiki and MediaWiki, so there’s the potential to do very similar things across a number of platforms.
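
For the every-X-days part, Apps Script’s time-driven triggers can schedule the refresh, and the ‘Hello World’ screen can lean on the standard posts endpoint. Both functions below are sketches under those assumptions; updateAllSites is the hypothetical driver from above.

//run once to schedule the refresh, here every three days (a sketch)
function scheduleUpdates() {
  ScriptApp.newTrigger('updateAllSites')
    .timeBased()
    .everyDays(3)
    .create();
}

//rough trial-site screen: is the newest post still the default 'Hello World'?
//assumes the site's /wp-json/wp/v2/posts endpoint is publicly readable
function looksLikeTrialSite(url) {
  var response = UrlFetchApp.fetch(url + '/wp-json/wp/v2/posts?per_page=1');
  var posts = JSON.parse(response.getContentText());
  return posts.length === 0 || /hello world/i.test(posts[0].title.rendered);
}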

Another big hassle was taking screenshots. I’ve always found that to be a pain. It’s certainly a point of friction for anyone doing this at any scale, and it’s a hassle to keep the screenshots updated. After a great deal of looking around over the years, I’ve found very few solutions and no great ones. I’m currently using PhantomJS to do it. Essentially, I can set up a script and then run a cron task against it. The script is below; it takes an array of URLs and saves a screenshot of each, named by URL, in a particular directory.

//from http://j4n.co/blog/batch-rendering-screenshots-with-phantomjs

var URLS = []; //fill with the site URLs to capture

var SCREENSHOT_WIDTH = 1280; 
var SCREENSHOT_HEIGHT = 768; 
var LOAD_WAIT_TIME = 5000; 

var getPageTitle = function(page){
    var documentTitle = page.evaluate(function(){
        return document.title; 
    })
    console.log("getting title:", documentTitle)
    return documentTitle; 
}

var getPageHeight = function(page){
    var documentHeight = page.evaluate(function() { 
        return document.body.offsetHeight; 
    })
    console.log("getting height:", documentHeight)
    return documentHeight; 
}

var renderPage = function(page,index,URLS){

    var title = URLS[index].substring(7); //strip 'http://' so the filename is just the host

    var pageHeight = getPageHeight(page); 

    page.clipRect = {
        top:0,left:0,width: SCREENSHOT_WIDTH, 
        height: SCREENSHOT_HEIGHT
    };
    page.render('/home/bionicte/public_html/gtown/screenshots/'+title+".jpeg" , {format: 'jpeg', quality: '80'});
    console.log("rendered:", title+".jpeg")
}

var exitIfLast = function(index,array){
    console.log(array.length - index-1, "more screenshots to go!")
    console.log("~~~~~~~~~~~~~~")
    if (index == array.length-1){
        console.log("exiting phantomjs")
        phantom.exit();
    }
}

var takeScreenshot = function(element){

    console.log("opening URL:", element)

    var page = require("webpage").create();

    page.viewportSize = {width:SCREENSHOT_WIDTH, height:SCREENSHOT_HEIGHT};

    page.open(element); 

    console.log("waiting for page to load...")

    page.onLoadFinished = function() {
        setTimeout(function(){
            console.log("that's long enough")
            renderPage(page,index,URLS)
            exitIfLast(index,URLS)
            index++; 
            takeScreenshot(URLS[index]);
        },LOAD_WAIT_TIME)
    }
}
var index = 0; 
takeScreenshot(URLS[index]);
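
Rather than hard-coding URLS, one option (my assumption, not something described in the post) is to export the sheet’s URL column as a plain text file and read it with PhantomJS’s fs module.

//hypothetical: load one URL per line from a file exported from the sheet
var fs = require('fs');
var URLS = fs.read('/home/bionicte/urls.txt').trim().split('\n');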

Now we can take our Google Sheet and the information in it and blend it with the screenshots. In this case, I’m using Vue, but it could be done in anything.

<div id="sites" class="container main-content">
<nav class="navbar navbar-default navbar-fixed-top">
      <div class="container">
        <div class="navbar-header">
          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
            <span class="sr-only">Toggle navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
          <a class="navbar-brand" href="https://georgetown.domains/"></a>
        </div>
        <div id="navbar" class="navbar-collapse collapse">
          <ul class="nav navbar-nav">            
          </ul>
          <ul class="nav navbar-nav navbar-right">
            <li><button @click="searchText='cat_a'" >cat a</button></li>
            <li><button @click="searchText='cat_b'" >cat b</button></li>
            <li><button @click="searchText='cat_c'" >cat c</button></li>
            <li><button @click="searchText=''" >reset</button></li>   
            <li><input v-model="searchText"></li>     
            <li></li>
          </ul>
        </div><!--/.nav-collapse -->
      </div>
    </nav>

  <div class="loading">
      <i v-if="!sites" class="fa fa-spinner fa-spin loading"></i>
  </div>
  <div class="row">
    <div v-for="site in filterBy(sites, searchText)" :key="theLink(site)" class="col-md-4 the-blog item" transition="site" :style="{backgroundColor: randomColor()}">
        <a :href="theLink(site)" target="_blank" class="commit">
          <div class="site-info">
            <div class="title" v-html="theTitle(site)"></div>
            <div class="description" v-html="theDescription(site)"></div>
          </div>
        </a>
            <div>
              <img class="img-fluid" :src="getThumbnail(site)" width="100%" height="auto"/>           
            </div>
        <div class="extra-info">
          <a :href="dataLink(site)" target="_blank"><i class="fa fa-user-circle-o" aria-hidden="true"></i></a> // 
          <a :href="timeLink(site)" target="_blank"><i class="fa fa-calendar-o" aria-hidden="true"></i></a>
        </div>
      <!--<div :data-url="theLink(site)" class="load" >
         {{site.gsx$url.$t.substring(7)}}
        
           <button @click="fetchPosts(site);select($event)" :id="theId(site)">Click me</button>      

      </div>    -->
    </div>      
  </div>  
</div>
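
One caveat on that template: filterBy was a built-in filter in Vue 1 but isn’t part of Vue 2, and it isn’t defined in the Vue instance that follows, so presumably it comes from something like the vue2-filters mixin. A minimal stand-in, matching how the template calls it, might look like this sketch of mine (added alongside the other methods):

    //keep the sites whose serialized sheet data contains the search text
    filterBy: function (sites, searchText) {
      if (!sites) return [];
      if (!searchText) return sites;
      var needle = searchText.toLowerCase();
      return sites.filter(function (site) {
        return JSON.stringify(site).toLowerCase().indexOf(needle) !== -1;
      });
    }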

The Vue end of that HTML template looks like so.

Vue.use(VueLazyload)

var spreadsheetID = "YOUR-SPREADSHEET-ID";
                      
// Make sure it is public or set to Anyone with link can view 
var blogURL = "https://spreadsheets.google.com/feeds/list/" + spreadsheetID + "/1/public/values?alt=json";

var blog = new Vue({
  el: '#sites',
  
  data: {
    sites: null,
    posts: null,
    searchText: null
  },
  
  watch: {
    // object literals can't repeat keys, so of the original three
    // 'currentPage' entries only the last one ever registered
    currentPage: 'fetchSites'
  },

  created: function () {
    this.fetchSites()
  },
  

  methods: {
    fetchSites: function () {
      var xhr = new XMLHttpRequest()
      var self = this
      xhr.open('GET', blogURL  )
      xhr.onload = function () {
        // 'this' in here is the XHR, not the Vue instance; the spinner in the
        // template keys off sites being null instead of a loading flag
        self.sites = JSON.parse(xhr.responseText).feed.entry
      }
      xhr.send()
    },
        
    theTitle: function(site) {
      return site.gsx$title.$t;
    },

    theLink: function(site) {
      return site.gsx$url.$t
    },

    theId: function(site){
      var shortUrl = site.gsx$url.$t.substring(7)
      return shortUrl
    },


    theDescription: function(site){
      return site.gsx$description.$t;
    },

    getThumbnail: function(site){
      var stem = site.gsx$url.$t
      stem = stem.substring(7)
      return 'screenshots/' + stem + '.jpeg'
    },

    fetchPosts: function(post) {
      var postsUrl = post.gsx$url.$t + '/wp-json/wp/v2/posts/?per_page=3'
      console.log(postsUrl)
      var xhr = new XMLHttpRequest()
      xhr.timeout = 2000 // time in milliseconds
      var sitePosts = this
      xhr.open('GET', postsUrl)
      xhr.onload = function () {
        // XHR is asynchronous, so the posts are only available in here,
        // not immediately after send()
        sitePosts.posts = JSON.parse(xhr.responseText)
      }
      xhr.send(null)
    },

    postTitle: function(thePosts){
      return thePosts[0].title.rendered // use the argument; sitePosts was out of scope here
    },

    select: function(event){
      var targetId = event.currentTarget.id // declare with var so it doesn't leak globally
      console.log('id = ' + targetId)
    },

    writePosts: function(id){
      document.getElementById(id).innerHTML = "New text!";
    },

    searchBy: function(cat){
      this.searchText = cat // set the Vue data property rather than an implicit global
      console.log(this.searchText)
      return this.searchText
    },

    randomColor: function(){
      var bgColors = ['#72984b', '#e1261c', '#8a2432', '#b9d9ec', '#f9e08e ']
      var rand = Math.floor(Math.random() * bgColors.length);
      console.log(bgColors[rand])
      return bgColors[rand];
    },

    dataLink: function(site){
      return 'http://bionicteaching.com/gtown/gtown.html#' + site.gsx$url.$t;
    },

    timeLink: function(site){
      return 'http://bionicteaching.com/gtown/blogtime#' + site.gsx$url.$t.substring(7);
    }

  },
  mounted() {
    var buttons = this.$el.getElementsByClassName('load');
    console.log(buttons);
    for (var i = 0; i < buttons.length; i++){
      var url = buttons[i].dataset.url;
      console.log(url);
    }
  } 
});
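
For reference, the gsx$ accessors above reflect the shape of the old Google Sheets list feed: column headers get lowercased into gsx$ keys with the cell value under $t. One entry arrives roughly like this (an illustrative sketch with made-up values, not a verbatim response):

{
  "gsx$url":         { "$t": "http://example.georgetown.domains" },
  "gsx$title":       { "$t": "Example Site" },
  "gsx$description": { "$t": "A student portfolio" }
}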

Turns out there’s a whoooole lot of stuff I haven’t written up fully, and I’m getting tired, so consider this a rough round one for this discussion.


1 As opposed to communal sites, like phpBB, which have different options.

2 Apathy is probably worse than hatred in this scenario.

3 As at least two people have messaged me: even Gangnam Style cannot reign forever.

4 I don’t mean that in a derogatory way. There may be a better word, something that means showing good things to encourage good things.
