From af0683890cd29169f79bb0b27e2a897d6f8aaa7e Mon Sep 17 00:00:00 2001
From: Adam <24621027+adoyle0@users.noreply.github.com>
Date: Sun, 5 Feb 2023 21:30:53 -0500
Subject: [PATCH] whitespace and broken links

---
 doordesk/public/blog/000000000-swim.html    | 6 +++---
 doordesk/public/blog/20220614-reddit.html   | 8 ++++----
 doordesk/public/blog/20220701-progress.html | 6 ++----
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/doordesk/public/blog/000000000-swim.html b/doordesk/public/blog/000000000-swim.html
index 2c9201a..01cc813 100644
--- a/doordesk/public/blog/000000000-swim.html
+++ b/doordesk/public/blog/000000000-swim.html
@@ -128,12 +128,12 @@
 can do to make a living WITHIN that way of life. But you say, "I don't
 know where to look; I don't know what to look for."
-
+
 And there's the crux. Is it worth giving up what I have to look for
 something better? I don't know—is it? Who can make that decision but
 you? But even by DECIDING TO LOOK, you go a long way toward making the
 choice.
-
-
+
+If I don't call this to a halt, I'm going to find myself writing a
 book. I hope it's not as confusing as it looks at first glance. Keep
 in mind, of course, that this is MY WAY of looking at things. I happen
 to think that it's pretty
diff --git a/doordesk/public/blog/20220614-reddit.html b/doordesk/public/blog/20220614-reddit.html
index 830a076..449b97f 100644
--- a/doordesk/public/blog/20220614-reddit.html
+++ b/doordesk/public/blog/20220614-reddit.html
@@ -22,7 +22,7 @@
-          Scrapey is my scraper script that takes a snapshot 
+          Scrapey is my scraper script that takes a snapshot
           of Reddit/r/all hot and saves the data to a .csv file including
           a calculated age for each post about every 12 minutes. Run time
           is about 2 minutes per iteration and each time adds about 100
           unique posts to the list while updating any post it's already
           seen.
@@ -33,7 +33,7 @@
-          Next I take a quick look to see what looks useful, what 
+          Next I take a quick look to see what looks useful, what
           doesn't, and check for outliers that will throw off the model.
           There were a few outliers to drop from the num_comments column.
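The hunk above mentions dropping outliers from the num_comments column, but the patch doesn't show how they were identified. As a rough sketch only (a Tukey IQR fence is an assumption here, not necessarily what the blog's notebook actually used), the filter could look like:

```python
import pandas as pd

def drop_outliers_iqr(df: pd.DataFrame, column: str, k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose `column` value falls outside Tukey's IQR fences.

    Rows are kept when q1 - k*IQR <= value <= q3 + k*IQR.
    """
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return df[df[column].between(lo, hi)]
```

With `k=1.5` this keeps the bulk of posts while cutting the handful of viral threads whose comment counts would otherwise dominate the model.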
@@ -54,7 +54,7 @@ for further processing.
-Cleaning the data further consists of: 
+Cleaning the data further consists of:
Some Predictors from Top 25:
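Scrapey itself isn't part of this patch, only the blog text describing it. As a hedged sketch of the snapshot-and-update step described above (the column names `id` and `created_utc` and the merge strategy are assumptions; the real script may differ), each 12-minute iteration's CSV merge might look like:

```python
import os
import time

import pandas as pd

def update_snapshot(new_posts: pd.DataFrame, csv_path: str) -> pd.DataFrame:
    """Merge one scrape iteration into the running CSV.

    `new_posts` needs an `id` column plus a `created_utc` epoch time;
    `age` is recomputed for every post fetched this iteration. Posts
    already in the CSV are updated in place, new ones are appended.
    """
    new_posts = new_posts.copy()
    new_posts["age"] = time.time() - new_posts["created_utc"]
    if os.path.exists(csv_path):
        old = pd.read_csv(csv_path)
        # keep="last" means the fresh scrape wins for any post seen before
        merged = pd.concat([old, new_posts]).drop_duplicates(subset="id", keep="last")
    else:
        merged = new_posts
    merged.to_csv(csv_path, index=False)
    return merged
```

Called from a loop that scrapes r/all hot and sleeps ~12 minutes, this reproduces the behavior described: roughly 100 new rows per run, with previously seen posts refreshed rather than duplicated.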
 After finding a number of ways not to begin the project formerly
 known as my capstone, I've finally settled on a
-          dataset. The project is about detecting bots, starting with twitter. I've
+          dataset.
+          The project is about detecting bots, starting with twitter. I've
           studied a few different