<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Joshua Hernandez's Blog</title><link>http://joshuahernandezblog.com/</link><description>This is a demo site for Nikola.</description><atom:link href="http://joshuahernandezblog.com/rss.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2019 &lt;a href="mailto:Joshua.M.S.Hernandez@gmail.com"&gt;Joshua Hernandez&lt;/a&gt; </copyright><lastBuildDate>Wed, 16 Jan 2019 22:40:53 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Object Detection Part 1</title><link>http://joshuahernandezblog.com/blog/object-detection-part-1/</link><dc:creator>Joshua Hernandez</dc:creator><description>&lt;div tabindex="-1" id="notebook" class="border-box-sizing"&gt;
    &lt;div class="container" id="notebook-container"&gt;

&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;My goal in this series is to deploy a neural network capable of identifying and localizing pedestrians in an image (the combination of image classification and localization is called object detection). In the first part of the series, I will download a pretrained model and use transfer learning to fine-tune it for my problem. In the second part, I will try to create and deploy a model from scratch.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;&lt;a href="http://joshuahernandezblog.com/blog/object-detection-part-1/"&gt;Read more…&lt;/a&gt; (20 min remaining to read)&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><guid>http://joshuahernandezblog.com/blog/object-detection-part-1/</guid><pubDate>Wed, 16 Jan 2019 19:01:44 GMT</pubDate></item><item><title>A Perfectly Cromulent Intro to Simpson's Paradox</title><link>http://joshuahernandezblog.com/blog/a-perfectly-cromulent-intro-to-simpsons-paradox/</link><dc:creator>Joshua Hernandez</dc:creator><description>&lt;div tabindex="-1" id="notebook" class="border-box-sizing"&gt;
    &lt;div class="container" id="notebook-container"&gt;

&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;&lt;i&gt;In this post, we will go over one of my favorite statistical phenomena, Simpson's paradox, using interactive data modules. &lt;/i&gt;
&lt;/p&gt;&lt;p&gt;&lt;a href="http://joshuahernandezblog.com/blog/a-perfectly-cromulent-intro-to-simpsons-paradox/"&gt;Read more…&lt;/a&gt; (9 min remaining to read)&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><category>Quick Post</category><category>Statistics</category><guid>http://joshuahernandezblog.com/blog/a-perfectly-cromulent-intro-to-simpsons-paradox/</guid><pubDate>Fri, 15 Sep 2017 05:57:28 GMT</pubDate></item><item><title>You've Got Mail: Building a Spam Filter with R</title><link>http://joshuahernandezblog.com/blog/Spam/youve-got-mail-building-a-spam-filter-with-r/</link><dc:creator>Joshua Hernandez</dc:creator><description>&lt;div tabindex="-1" id="notebook" class="border-box-sizing"&gt;
    &lt;div class="container" id="notebook-container"&gt;

&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;&lt;i&gt;We take an unorthodox approach to building a spam filter in R, using k-means clustering. The first half covers the theory behind spam filters and clustering algorithms; the second half walks through how I built the filter in R. &lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://joshuahernandezblog.com/blog/Spam/youve-got-mail-building-a-spam-filter-with-r/"&gt;Read more…&lt;/a&gt; (12 min remaining to read)&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><guid>http://joshuahernandezblog.com/blog/Spam/youve-got-mail-building-a-spam-filter-with-r/</guid><pubDate>Fri, 25 Aug 2017 23:26:37 GMT</pubDate></item><item><title>Into the Woods: Visualizing Random Forests with R</title><link>http://joshuahernandezblog.com/blog/Into%20the%20Woods/into-the-woods-visualizing-random-forests-with-r/</link><dc:creator>Joshua Hernandez</dc:creator><description>&lt;div tabindex="-1" id="notebook" class="border-box-sizing"&gt;
    &lt;div class="container" id="notebook-container"&gt;

&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;&lt;i&gt;You've probably heard random forest models described as "black boxes": models that reveal an input and an output and nothing in between. In this post, we go over techniques for visualizing what a random forest model is doing, making it less of a black box. &lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://joshuahernandezblog.com/blog/Into%20the%20Woods/into-the-woods-visualizing-random-forests-with-r/"&gt;Read more…&lt;/a&gt; (3 min remaining to read)&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><category>Machine Learning</category><category>Quick Post</category><category>Random Forest</category><guid>http://joshuahernandezblog.com/blog/Into%20the%20Woods/into-the-woods-visualizing-random-forests-with-r/</guid><pubDate>Mon, 21 Aug 2017 13:48:17 GMT</pubDate></item><item><title>Web Scraping with R</title><link>http://joshuahernandezblog.com/blog/MovieProject/web-scraping-with-r/</link><dc:creator>Joshua Hernandez</dc:creator><description>&lt;div tabindex="-1" id="notebook" class="border-box-sizing"&gt;
    &lt;div class="container" id="notebook-container"&gt;

&lt;div class="cell border-box-sizing text_cell rendered"&gt;&lt;div class="prompt input_prompt"&gt;
&lt;/div&gt;
&lt;div class="inner_cell"&gt;
&lt;div class="text_cell_render border-box-sizing rendered_html"&gt;
&lt;p&gt;&lt;i&gt;I go over how to use R to harvest information from web pages. This post chronicles my use of rvest to scrape movie information from Rotten Tomatoes and explore the differences between professional critics and general audiences. &lt;/i&gt;
&lt;/p&gt;&lt;p&gt;&lt;a href="http://joshuahernandezblog.com/blog/MovieProject/web-scraping-with-r/"&gt;Read more…&lt;/a&gt; (23 min remaining to read)&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><category>data extraction</category><category>R</category><category>rvest</category><category>Web Scraping</category><guid>http://joshuahernandezblog.com/blog/MovieProject/web-scraping-with-r/</guid><pubDate>Fri, 18 Aug 2017 06:14:20 GMT</pubDate></item></channel></rss>