An Example Post

Here is some extra detail about the post.

Informal justifications

In Auto-Encoding Variational Bayes, Kingma presents an unbiased, differentiable, and scalable estimator for the ELBO in variational inference. A key idea behind this estimator is the reparameterization trick. But why do we need this trick in the first place? When first learning about variational autoencoders (VAEs), I tried to find an answer online but found the explanations too informal…

$$e = mc^2. \tag{1}$$


Cool!

# NumPy, JAX, and standard-library imports used in the examples below
import numpy as np
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
import os
import sys


# check executable path
print(sys.executable)
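
With the JAX imports above in place, here is a minimal, informal sketch of the reparameterization trick mentioned earlier: rather than sampling z directly from N(μ, σ²), we draw ε from N(0, 1) and set z = μ + σ·ε, so the sample stays differentiable with respect to μ and σ. The toy objective below is only a stand-in for a per-sample loss, not the actual ELBO estimator.

import jax.numpy as jnp
from jax import grad, random  # repeated here so the snippet is self-contained


def sample_z(mu, log_sigma, key):
    # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, 1)
    eps = random.normal(key)
    return mu + jnp.exp(log_sigma) * eps


def toy_objective(params, key):
    mu, log_sigma = params
    z = sample_z(mu, log_sigma, key)
    return z ** 2  # stand-in for a per-sample loss term


key = random.PRNGKey(0)
grads = grad(toy_objective)(jnp.array([0.5, -1.0]), key)
print(grads)  # gradient of the sampled loss w.r.t. [mu, log_sigma]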

Here is something:

$$\mathbb{P}(X=2) = \frac{1}{3}$$

You can also show a snippet of Liquid/HTML, if you wish:


<div id='posts' class='section'>
    {% for post in site.posts %}
        <div class='post-row'>
            <p class='post-title'>
                <a href="{{ post.url }}">
                    {{ post.title }}
                </a>
            </p>
            <p class='post-date'>
                {{ post.date | date_to_long_string }}
            </p>
        </div>
        <p class='post-subtitle'>
            {{ post.subtitle }}
        </p>
    {% endfor %}
</div>

Citations

According to (Bishop, 2006), machine learning … You can cite another work (Vaswani et al., 2017). You can also use an inline citation: Bishop (2006) argues that …

Figures

Without zoom

A demo figure
Figure 1. Diagram of rejection sampling. The scaled density $kq(\mathbf{z})$ must always be greater than or equal to $p(\mathbf{z})$. A new sample is rejected if it falls in the gray region and accepted otherwise. The accepted samples are distributed according to $p(\mathbf{z})$. This is achieved by sampling $z_i$ from $q(\mathbf{z})$ and then sampling uniformly from $[0, kq(z_i)]$; samples under the curve $p(z_i)$ are accepted.
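
To make the procedure in the caption concrete, here is a small NumPy sketch of rejection sampling. The particular target $p(z)$, proposal $q(z)$, and constant $k$ below are placeholder choices for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def p(z):
    # Target density: a two-component Gaussian mixture (illustrative choice)
    return (0.7 * np.exp(-0.5 * (z - 1.0) ** 2)
            + 0.3 * np.exp(-0.5 * (z + 2.0) ** 2)) / np.sqrt(2 * np.pi)

def q(z):
    # Proposal density: a wide zero-mean Gaussian that is easy to sample from
    return np.exp(-0.5 * (z / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

k = 4.0  # chosen so that k * q(z) >= p(z) over the region of interest

def rejection_sample(n):
    samples = []
    while len(samples) < n:
        z = rng.normal(0.0, 3.0)        # sample z_i from q(z)
        u = rng.uniform(0.0, k * q(z))  # sample u uniformly from [0, k q(z_i)]
        if u < p(z):                    # accept if the point falls under p(z_i)
            samples.append(z)
    return np.array(samples)

print(rejection_sample(5))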

With zoom

OECD HAN database illustration
Figure 2. OECD HAN database structure: there are four tables, among which HAN_PERSON is the correspondence table that includes all cleaned names.

Tables

Table 1. First six rows of HAN_PATENTS, which starts at HAN_ID = 4.
| HAN_ID | HARM_ID | Appln_id | Publn_auth | Patent_number |
| --- | --- | --- | --- | --- |
| 4 | 4 | 311606173 | US | US8668089 |
| 7 | 7 | 439191607 | US | US9409947 |
| 7 | 7 | 518367793 | US | US10836794 |
| 10 | 10 | 365204276 | US | US8513480 |
| 14 | 14 | 336903179 | WO | WO2011112122 |
| 14 | 14 | 363622722 | WO | WO2012064218 |
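
As a sketch of how the correspondence table can be used, the snippet below joins HAN_PATENTS to HAN_PERSON on the HAN_ID key with pandas. The file names and exact column layout are assumptions for illustration; check the actual OECD HAN release you are using.

import pandas as pd

# Hypothetical file names for two of the four tables
patents = pd.read_csv("HAN_PATENTS.csv")  # HAN_ID, HARM_ID, Appln_id, Publn_auth, Patent_number
persons = pd.read_csv("HAN_PERSON.csv")   # correspondence table with the cleaned names

# Attach the cleaned applicant name to each patent record via the HAN_ID key
merged = patents.merge(persons, on="HAN_ID", how="left")
print(merged.head(6))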

Table and figure aligned

| Top 5 IPC class | Count |
| --- | --- |
| B29C70-38 | 14 |
| B29C70-54 | 13 |
| G08G5-00 | 13 |
| B64C39-02 | 13 |
| G05D1-00 | 12 |
Airbus patents distribution
Figure 4. The IPC distribution of Airbus Defence (DE)'s patents: section B covers performing operations and transporting; section H covers electricity; section G covers physics. Remark: the total number of granted patents is 538, but the total count of IPC class assignments is 1763, which means some patents are assigned to multiple classes.
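
A count like the one in the table above can be produced with a simple tally over (patent, IPC class) assignments; the snippet below uses made-up assignments purely to show the mechanics.

from collections import Counter

# Hypothetical IPC assignments: one entry per (patent, class) pair,
# so a patent with several classes contributes several entries
ipc_assignments = [
    "B29C70-38", "B29C70-54", "G08G5-00", "B64C39-02", "G05D1-00",
    "B29C70-38", "G08G5-00", "B64C39-02",
]

# Tally the classes and keep the five most frequent ones
for ipc, count in Counter(ipc_assignments).most_common(5):
    print(ipc, count)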

Table overflow if you need it

| Analysis | Patent statistics | Purposes |
| --- | --- | --- |
| Citation analysis | Forward citations | Measure technological value |
| | Backward citations | Find knowledge source |
| Patent counts analysis | Patent counts | Observe patent portfolio |
| | RTA (Revealed Technology Advantage) | Identify core technological competence |
| | PS (Patent Share) | |
| Technology class analysis | Generality | Measure endogenous applicability to different technological fields |
| | Originality | Measure knowledge absorption from different technological fields |
| Inventor analysis | Inventor counts | Measure invention quality |
| | | Measure absorptive capability |
| | Inventor | Identify specific inventors’ info such as star engineers |
| | | Follow mobility of R&D personnel |
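
For reference, one common way to define the RTA index listed above is as a ratio of patent shares (stated informally here; the exact variant in use may differ):

$$\mathrm{RTA}_{ij} = \frac{P_{ij} / \sum_{j} P_{ij}}{\sum_{i} P_{ij} / \sum_{i,j} P_{ij}},$$

where $P_{ij}$ is the number of patents held by firm $i$ in technology class $j$; values above 1 suggest that the firm is relatively specialized in that class.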

References

1. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.