Links to Helpful Things
- [stack-exchange] Unix Partition Mount On Startup
- [one-liner] conda search tensorflow | less
- [ubuntu][tip] As of Nov 2019, sudo apt-get install hugo installs an old version (circa 2016). Install binaries from the site instead.
- [github][tensorflow] Can't find convolution algorithm, cuDNN failed to initialize.
- [self-hosted][academia] LaTeX for Journal Publications (ongoing notes)
- [stack-overflow] Keras General Loss Function from any defined layers/variables
Setting up SMTP for Gmail
January 2, 2020
Also known as responding "as another account". I couldn't find this when I googled. It's relatively easy, but I thought I should write it down.
- Make a Gmail account, and (optionally) set up automatic forwarding to that @gmail.com address from your host account (e.g. the @csail.mit.edu account).
- Go to Settings, then Accounts, then "Send Mail As". At the time of writing, Settings was the gear symbol in the right-hand corner of the content box (above the inbox, below the header/search bar).
- Add a new email address to send from. This requires the SMTP information (the server address, usually something like outgoing.mail.edu, plus the protocol details), then some handshake steps to make sure you're you. If you don't have this information, your admin should; a quick way to test the relay yourself is sketched after this list. MIT's can be found here: [SMTP].
- After verification, you can choose which account each new email is sent from by clicking/tapping the "From" field in the header of the email composition window. A default sending account can also be set from the same interface.
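If you want to sanity-check the relay details before handing them to Gmail, a short smtplib sketch like the one below will do it; the host, port, addresses, and credentials here are placeholders to swap for your own.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "outgoing.example.edu"  # placeholder; use the relay your admin gives you
SMTP_PORT = 587                     # the usual STARTTLS submission port, but confirm it

msg = EmailMessage()
msg["From"] = "you@example.edu"
msg["To"] = "you@gmail.com"
msg["Subject"] = "SMTP relay test"
msg.set_content("If this arrives, the same settings should work in Gmail's Send Mail As.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()                     # upgrade the connection before authenticating
    server.login("username", "password")  # the same credentials Gmail will ask for
    server.send_message(msg)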
Batch Sparse Matrix Multiply in Tensorflow
May 15, 2020
Tensorflow's SparseTensor class does not support broadcasted matrix multiplies as of time of writing. This means that if you expect tf.sparse.sparse_dense_matmul to behave the same way as tf.matmul, you'll end up with an exception for rank >= 3 sparse tensors.
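For concreteness, here is a minimal sketch of the shapes assumed in the snippets below; the sizes are illustrative, but the names sp_ten and dn_ten match the code that follows.
import tensorflow as tf

# A single sparse matrix of shape [n, m] = [3, 4] ...
sp_ten = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 2]], values=[1.0, 2.0], dense_shape=[3, 4]
)
# ... and a batch of dense matrices of shape [batch, m, k] = [8, 4, 5].
dn_ten = tf.random.normal([8, 4, 5])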
Still, there are ways around this for particular cases. If you know the SparseTensor is the same across the whole batch, you can map the multiply over the dense batch:
# sp_ten: [n, m] SparseTensor shared across the batch; dn_ten: [batch, m, k] dense.
output = tf.map_fn(
    lambda x: tf.sparse.sparse_dense_matmul(sp_ten, x), dn_ten
)  # -> [batch, n, k]
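As a quick sanity check (using the illustrative sp_ten and dn_ten defined above), the mapped result should match a dense einsum against the densified sparse matrix:
dense_ref = tf.einsum("nm,bmk->bnk", tf.sparse.to_dense(sp_ten), dn_ten)
print(tf.reduce_max(tf.abs(output - dense_ref)))  # expect a value at or near 0.0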
Alternatively, if you know the batch size (and sp_ten is itself batched, i.e. a rank-3 SparseTensor with one sparse matrix per batch element), you can pair the sparse and dense tensors together and map over both:
# sp_ten here: [batch, n, m] SparseTensor; dn_ten: [batch, m, k] dense Tensor.
tens = (sp_ten, dn_ten)
output = tf.map_fn(
    lambda x: tf.sparse.sparse_dense_matmul(x[0], x[1]), tens,
    fn_output_signature=dn_ten.dtype,  # output structure differs from elems; older TF uses dtype=
)
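For reference, a batched sp_ten for this second case could be built along these lines (the indices and shape are purely illustrative):
# One sparse [3, 4] matrix per batch element, stacked into an [8, 3, 4] SparseTensor;
# each index is [batch, row, col].
sp_ten = tf.sparse.SparseTensor(
    indices=[[0, 0, 0], [0, 1, 2], [1, 2, 3]],
    values=[1.0, 2.0, 3.0],
    dense_shape=[8, 3, 4],
)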