diff --git a/readme.md b/readme.md
index 617bb572bd501575190040c258dbf690061003e8..2cb21ef94bc21d52733802e8f24af69d98b1203e 100644
--- a/readme.md
+++ b/readme.md
@@ -1,12 +1,37 @@
+# Python - Intro to Back-Propagation
 
-# CS5300 a1
 
+## Description
+Attempts to implement Neural Network back-propagation. Once again, this works with the Boston Housing data from the
+previous assignment. Calculations should be much more accurate than in the previous assignment.
 
-## Author
-Brandon Rodriguez
 
+## Back-Propagation
+Back-propagation is one of the most fundamental mechanisms behind modern Neural Nets: it is how a network learns from
+its own mistakes.
 
-## Description
 
-Neural Network back-propagation project.
+### The Concept
+Essentially, the dataset is plotted onto a multi-dimensional graph, where each data attribute is treated as its own
+dimension. This multi-dimensional surface will have theoretical hills and valleys, where the hills represent "local
+maxima" and the valleys represent "local minima".
+
+The network tries to determine which valley is the deepest, because that "global minimum" is the one that will be
+"most correct".
+
+In theory, this n-dimensional graph translates directly to an equation such as ax<sub>1</sub> + bx<sub>2</sub> + ... +
+yx<sub>n-1</sub> + zx<sub>n</sub>, such that some location [a, b, ..., y, z] on the graph will correctly solve most
+(or all) records [x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>n-1</sub>, x<sub>n</sub>] in the dataset.
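+
+As a rough illustration of that idea (a sketch, not code from this project), the "height" of the graph at a given
+location [a, b, ..., y, z] can be measured as the model's average error over the dataset; `error_at`, `weights`, and
+`records` below are hypothetical names:
+
+```python
+def error_at(weights, records):
+    """Mean squared error of a simple linear model at one point on the graph.
+
+    Each record is ([x_1, ..., x_n], known_answer); `weights` is one
+    candidate [a, b, ..., y, z] location.
+    """
+    total = 0.0
+    for inputs, known_answer in records:
+        prediction = sum(w * x for w, x in zip(weights, inputs))
+        total += (prediction - known_answer) ** 2
+    return total / len(records)
+```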
+
+### Back-Prop in Practice
+In practice, the network has two modes. In "training" mode, it checks itself against records with known answers and
+attempts to learn how to replicate those answers. Otherwise, it runs on unsolved records, using what it learned in
+training to (hopefully) solve those records correctly.
+
+When training, the network computes an output for a given record, then compares it against the known "correct" answer
+to see how close it was.
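+
+A minimal sketch of that training-time check, assuming a single linear unit (the function and variable names are
+illustrative, not this project's actual API):
+
+```python
+def training_error(weights, inputs, known_answer):
+    # Forward pass: compute the network's answer for this record.
+    prediction = sum(w * x for w, x in zip(weights, inputs))
+    # Compare against the known "correct" answer; zero means a perfect hit.
+    return known_answer - prediction
+```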
+
+At this point, if it was correct, then it pats itself on the back and moves on to the next record.
 
+If it was wrong, then it works backwards to "nudge" itself closer to a correct solution. Going back to the theoretical
+concept, this is essentially akin to updating some of the values [a, b, ..., y, z] so that the network moves closer to
+the global valley, or at least to one of the local valleys.
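+
+Sticking with the single-linear-unit sketch above (again hypothetical, with an assumed learning rate), that backwards
+"nudge" can look like a small step downhill for each weight:
+
+```python
+LEARNING_RATE = 0.01  # how far to nudge per mistake (assumed value)
+
+def nudge(weights, inputs, error):
+    """Move [a, b, ..., y, z] one small step toward a valley.
+
+    For a linear unit with squared error, the downhill direction for each
+    weight is proportional to error * input, so a wrong answer shifts the
+    weights closer to the global valley (or at least a local one).
+    """
+    return [w + LEARNING_RATE * error * x for w, x in zip(weights, inputs)]
+```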