more doc ideas

This commit is contained in:
Jeremy Howard
2018-04-02 16:13:58 -07:00
parent a31b2416eb
commit 5214032025
6 changed files with 92 additions and 9 deletions


@@ -20,7 +20,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -29,6 +29,15 @@
"torch.cuda.set_device(1)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"?re.compile"
]
},
{
"cell_type": "markdown",
"metadata": {},

docs/README.md Normal file

@@ -0,0 +1,20 @@
# fastai doc project

The fastai doc project is just getting underway! So now is a great time to get involved. Here are some thoughts and guidelines to help you get oriented...

## Project goals and approach

The idea of this project is to create documentation that makes readers say "wow that's the most fantastic documentation I've ever read". So... no pressure. :) How do we do this? By taking the philosophies demonstrated in fast.ai's courses and bringing them to the world of documentation. Here are a few guidelines to consider:

- Assume the reader is intelligent and interested
- Don't assume the reader has any specific knowledge about the field you're documenting
- If you need the reader to have some knowledge to understand your documentation, and there is an effective external resource they can learn from, point them there rather than trying to do it all yourself
- Use code, not math, to describe what's going on wherever possible
- Create a notebook demonstrating the ideas you're documenting (include the notebook in this repo) and show examples from the notebook directly in your docs
- Use a top-down approach; that is, first explain what problem the code is meant to solve, and at a high level how it solves it, and then go deeper into the details once those concepts are well understood
- For common tasks, show full end-to-end examples of how to complete the task

Use pictures, tables, analogies, and other explanatory devices (even embedded video!) wherever they can help the reader understand. Use hyperlinks liberally, both within these docs and to external resources.

We don't want this detailed documentation to create clutter in the code, and we also don't want to overwhelm the user when they just want a quick summary of what a method does. Therefore, docstrings should generally be limited to a single line. The Python standard library is documented this way; for instance, the docstring for `re.compile()` is the single line "*Compile a regular expression pattern, returning a pattern object.*" But the full documentation of the `re` library on the Python web site goes into detail about this method, how it's used, and its relation to other parts of the library.
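In practice the convention might look like this (a hypothetical wrapper, shown only to illustrate the one-line docstring style):

```python
import re

def compile_pattern(pat, flags=0):
    "Compile a regular expression pattern `pat`, returning a pattern object."
    return re.compile(pat, flags)

# The docstring gives only the quick summary; detailed discussion of
# arguments, flags, and related functions belongs in the docs pages.
print(compile_pattern(r"\d+").findall("a1b22c333"))  # → ['1', '22', '333']
```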


@@ -1,4 +1,4 @@
= fastai.transforms
= fastai.transforms
== Introduction and overview


@@ -1,4 +1,6 @@
= fastai.transforms
Jeremy Howard and contributors
:toc:
== Introduction and overview
@@ -17,7 +19,7 @@ You can create custom transform pipelines using an approach like: ...
If you want to create a custom transform, you will need to: ...
== Class Transform(tfm_y=TfmType.NO)
== Class Transform [.small]#(tfm_y=TfmType.NO)#
.Abstract parent for all transforms.
@@ -26,7 +28,7 @@ Override do_transform to implement transformation of a single object.
=== Arguments
tfm_y (type TfmType, default TfmType.NO)::
Type of transform. For details, see #TfmType[TfmType]
Type of transform. For details, see xref:TfmType[TfmType]
=== Methods
@@ -35,4 +37,13 @@ A transform may include a random component. If it does, it will often need to tr
+
**NB:** Transformations are often run in multiple threads, so any state must be stored in thread-local storage. The `Transform` class provides a thread-local `store` attribute for you to use. See {{xref RandomFlip}} for an example of how to use random state safely in `Transform` subclasses.
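To make the thread-safety note concrete, here is an illustrative sketch (not the fastai implementation) of a transform that keeps its random decision in thread-local storage, so that x and y see the same decision even when many threads share one transform object:

```python
import random
import threading

class RandomFlipSketch:
    "Illustrative transform keeping random state in thread-local storage."
    def __init__(self, p=0.5):
        self.p = p
        self.store = threading.local()  # each thread sees its own attributes

    def set_state(self):
        # Decide once per call whether to flip; storing the decision lets
        # x and y be transformed consistently within one thread.
        self.store.flip = random.random() < self.p

    def __call__(self, x, y=None):
        self.set_state()
        if self.store.flip:
            x = x[::-1]  # reversing a sequence stands in for flipping an image
        return x, y
```

With `p=1.0` the flip always happens; with `p=0.0` it never does, which makes the behaviour easy to verify.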
[[TfmType]]
== Class TfmType:IntEnum
.Type of transformation.
NO:: the default, y does not get transformed when x is transformed.
PIXEL:: x and y are images and should be transformed in the same way. _E.g.: image segmentation._
COORD:: y are coordinates (i.e. bounding boxes).
CLASS:: y are class labels (same behaviour as PIXEL, except no normalization).
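Such an enum might be declared as follows (a sketch; the member values shown are assumptions, not taken from the fastai source):

```python
from enum import IntEnum

class TfmType(IntEnum):
    "Type of transformation (member values here are illustrative)."
    NO    = 1  # y is untouched when x is transformed
    PIXEL = 2  # x and y are images; transform both the same way (e.g. segmentation)
    COORD = 3  # y holds coordinates such as bounding boxes
    CLASS = 4  # y holds class labels; like PIXEL but without normalization
```

Because `IntEnum` members behave as ints, they compare and serialize like plain integers (`TfmType.NO == 1`).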


@@ -5,6 +5,7 @@
<!--[if IE]><meta http-equiv="X-UA-Compatible" content="IE=edge"><![endif]-->
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="generator" content="Asciidoctor 1.5.6.2">
<meta name="author" content="Jeremy Howard and contributors">
<title>fastai.transforms</title>
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic%7CNoto+Serif:400,400italic,700,700italic%7CDroid+Sans+Mono:400,700">
<style>
@@ -428,6 +429,22 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
<body class="article">
<div id="header">
<h1>fastai.transforms</h1>
<div class="details">
<span id="author" class="author">Jeremy Howard and contributors</span><br>
</div>
<div id="toc" class="toc">
<div id="toctitle">Table of Contents</div>
<ul class="sectlevel1">
<li><a href="#_introduction_and_overview">Introduction and overview</a></li>
<li><a href="#_class_transform_span_class_small_tfm_y_tfmtype_no_span">Class Transform <span class="small">(tfm_y=TfmType.NO)</span></a>
<ul class="sectlevel2">
<li><a href="#_arguments">Arguments</a></li>
<li><a href="#_methods">Methods</a></li>
</ul>
</li>
<li><a href="#TfmType">Class TfmType:IntEnum</a></li>
</ul>
</div>
</div>
<div id="content">
<div class="sect1">
@@ -456,7 +473,7 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
</div>
</div>
<div class="sect1">
<h2 id="_class_transform_tfm_y_tfmtype_no">Class Transform(tfm_y=TfmType.NO)</h2>
<h2 id="_class_transform_span_class_small_tfm_y_tfmtype_no_span">Class Transform <span class="small">(tfm_y=TfmType.NO)</span></h2>
<div class="sectionbody">
<div class="paragraph">
<div class="title">Abstract parent for all transforms.</div>
@@ -468,7 +485,7 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
<dl>
<dt class="hdlist1">tfm_y (type TfmType, default TfmType.NO)</dt>
<dd>
<p>Type of transform. For details, see #TfmType[TfmType]</p>
<p>Type of transform. For details, see <a href="#TfmType">TfmType</a></p>
</dd>
</dl>
</div>
@@ -489,10 +506,36 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
</div>
</div>
</div>
<div class="sect1">
<h2 id="TfmType">Class TfmType:IntEnum</h2>
<div class="sectionbody">
<div class="dlist">
<div class="title">Type of transformation.</div>
<dl>
<dt class="hdlist1">NO</dt>
<dd>
<p>the default, y does not get transformed when x is transformed.</p>
</dd>
<dt class="hdlist1">PIXEL</dt>
<dd>
<p>x and y are images and should be transformed in the same way. <em>E.g.: image segmentation.</em></p>
</dd>
<dt class="hdlist1">COORD</dt>
<dd>
<p>y are coordinates (i.e. bounding boxes).</p>
</dd>
<dt class="hdlist1">CLASS</dt>
<dd>
<p>y are class labels (same behaviour as PIXEL, except no normalization).</p>
</dd>
</dl>
</div>
</div>
</div>
</div>
<div id="footer">
<div id="footer-text">
Last updated 2018-04-02 14:58:28 PDT
Last updated 2018-04-02 16:11:49 PDT
</div>
</div>
</body>


@@ -423,7 +423,7 @@ class RandomRotateZoom(CoordTransform):
self.pass_t = PassThru()
self.cum_ps = np.cumsum(ps)
assert self.cum_ps[3]==1, 'probabilities do not sum to 1; they sum to %f' % self.cum_ps[3]
def set_state(self):
self.store.choice = self.cum_ps[3]*random.random()
for i in range(len(self.transforms)):
@@ -431,7 +431,7 @@ class RandomRotateZoom(CoordTransform):
self.store.trans = self.transforms[i]
return
self.store.trans = self.pass_t
def __call__(self, x, y):
self.set_state()
return self.store.trans(x, y)
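The `set_state` logic above picks a transform by walking cumulative probabilities until one exceeds a random draw. A pure-Python sketch of that selection (a hypothetical helper; the real code uses `np.cumsum` and stores the result in thread-local state):

```python
from itertools import accumulate

def pick_by_probability(items, ps, u):
    "Pick from `items` given probabilities `ps` and a uniform draw `u` in [0, 1)."
    cum_ps = list(accumulate(ps))  # running totals, e.g. [0.5, 0.3, 0.2] -> [0.5, 0.8, 1.0]
    assert abs(cum_ps[-1] - 1) < 1e-9, 'probabilities do not sum to 1; they sum to %f' % cum_ps[-1]
    choice = cum_ps[-1] * u
    # Return the first item whose cumulative probability exceeds the draw
    for item, c in zip(items, cum_ps):
        if choice < c:
            return item
    return items[-1]  # fall-through default, like pass_t above
```

For example, with probabilities `[0.5, 0.3, 0.2]` a draw of 0.6 lands in the second bucket, since 0.5 ≤ 0.6 < 0.8.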