is obtained.
  
Similarly, as in the grey region of the figure below, one can also consider adding a single spin variable together with all of its adjacent function nodes at once.
{{ :신경망:fig2.3.png?500 |}}
The new Hamiltonian is then
$$H^{\text{new}} = H^{\text{old}} -\sum_{b\in\partial i}J_b\prod_{j\in\partial b}\sigma_j$$
and the partition function becomes
\begin{align*}
Z^{\text{new}}=&\sum_{\sigma^{\text{old}}}\sum_{\sigma_i}\exp\left(-\beta H^{\text{old}} + \beta\sum_{b\in\partial i}J_b\prod_{j\in\partial b}\sigma_j\right)\\
=&\sum_{\sigma^{\text{old}}}\sum_{\sigma_i}\exp\left(-\beta H^{\text{old}} + \beta\sum_{b\in\partial i}J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)\\
=&Z^{\text{old}}\sum_{\sigma^{\text{old}}}\sum_{\sigma_i}\frac{\exp\left(-\beta H^{\text{old}}\right)}{Z^{\text{old}}}\exp\left(\beta\sum_{b\in\partial i}J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)
\end{align*}
is obtained. As above, approximating the cavity distribution by
$$P_{\text{cavity}}(\{\sigma_j\vert j\in\partial b\backslash i;b\in\partial i\}) = \sum_{\{\sigma_j\vert j\not\in\partial b\backslash i;b\not\in\partial i\}}\frac{\exp\left(-\beta H^{\text{old}}\right)}{Z^{\text{old}}}\approx \prod_{b\in\partial i}\prod_{j\in\partial b\backslash i}q_{j\rightarrow b}(\sigma_j)$$
we can write
\begin{align*}
Z^{\text{new}}=&Z^{\text{old}}\sum_{\{\sigma_j\vert j\in\partial b\backslash i;b\in\partial i\}}\sum_{\sigma_i}P_{\text{cavity}}(\{\sigma_j\vert j\in\partial b\backslash i;b\in\partial i\})\exp\left(\beta\sum_{b\in\partial i}J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)\\
\approx&Z^{\text{old}}\sum_{\{\sigma_j\vert j\in\partial b\backslash i;b\in\partial i\}}\sum_{\sigma_i}\prod_{b\in\partial i}\prod_{j\in\partial b\backslash i}q_{j\rightarrow b}(\sigma_j)\exp\left(\beta\sum_{b\in\partial i}J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)\\
=&Z^{\text{old}}\sum_{\sigma_i}\prod_{b\in\partial i}\left[\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\prod_{j\in\partial b\backslash i}q_{j\rightarrow b}(\sigma_j)\exp\left(\beta J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)\right]
\end{align*}
Defining the term in square brackets as
\begin{align*}
\Lambda_{b\rightarrow i}^{\sigma_i} \equiv&\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\prod_{j\in\partial b\backslash i}q_{j\rightarrow b}(\sigma_j)\exp\left(\beta J_b\sigma_i\prod_{j\in\partial b\backslash i}\sigma_j\right)\\
=&\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\left[\prod_{j\in\partial b\backslash i}\frac{1+\sigma_j m_{j\rightarrow b}}{2}\right]\cosh(\beta J_b)\left(1+\sigma_i\tanh(\beta J_b)\prod_{j\in\partial b\backslash i}\sigma_j\right)
\end{align*}
where $q_{j\rightarrow b}(\sigma_j)=\frac{1+\sigma_j m_{j\rightarrow b}}{2}$ has been used, we evaluate each term separately. The first term is
$$\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\prod_{j\in\partial b\backslash i}\frac{1+\sigma_j m_{j\rightarrow b}}{2} = 1$$
\begin{align*}
&\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\left[\prod_{j\in\partial b\backslash i}\frac{1+\sigma_j m_{j\rightarrow b}}{2}\right]\sigma_i\tanh(\beta J_b)\prod_{j\in\partial b\backslash i}\sigma_j\\
=&\sigma_i\tanh(\beta J_b)\sum_{\{\sigma_j\vert j\in\partial b\backslash i\}}\prod_{j\in\partial b\backslash i}\frac{\sigma_j+m_{j\rightarrow b}}{2}\\
=&\sigma_i\tanh(\beta J_b)\prod_{j\in\partial b\backslash i}m_{j\rightarrow b}
\end{align*}
$$\Lambda_{b\rightarrow i}^{\sigma_i} = \cosh(\beta J_b)\left(1+\sigma_i\tanh(\beta J_b)\prod_{j\in\partial b\backslash i}m_{j\rightarrow b}\right)$$
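This closed form can be sanity-checked numerically: a brute-force sum over the cavity spins should reproduce it exactly, since $\exp(\beta J x)=\cosh(\beta J)(1+x\tanh(\beta J))$ is an identity for $x=\pm1$. A minimal sketch, with $\beta$, $J_b$, and the messages $m_{j\rightarrow b}$ chosen as arbitrary illustrative values:

```python
import itertools
import math

# Arbitrary illustrative values (not from the text): one factor b with
# three cavity neighbours j in ∂b\i carrying messages m_{j->b}.
beta, J_b = 1.3, 0.7
m = [0.4, -0.8, 0.15]

results = {}
for sigma_i in (+1, -1):
    # Brute force: sum over all sigma_j, weighted by q_{j->b}(sigma_j) = (1 + sigma_j m)/2
    brute = 0.0
    for sigmas in itertools.product((-1, +1), repeat=len(m)):
        q = math.prod((1 + s * mj) / 2 for s, mj in zip(sigmas, m))
        brute += q * math.exp(beta * J_b * sigma_i * math.prod(sigmas))
    # Closed form: cosh(beta J_b) (1 + sigma_i tanh(beta J_b) prod_j m_{j->b})
    closed = math.cosh(beta * J_b) * (
        1 + sigma_i * math.tanh(beta * J_b) * math.prod(m))
    results[sigma_i] = (brute, closed)
```

Because the identity is exact, the two values agree to machine precision for both $\sigma_i=\pm1$.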
so the new partition function can be written as
$$Z^{\text{new}} = Z^{\text{old}}\sum_{\sigma_i}\prod_{b\in\partial i}\Lambda_{b\rightarrow i}^{\sigma_i} = Z^{\text{old}}\left(\prod_{b\in\partial i}\Lambda_{b\rightarrow i}^{+}+\prod_{b\in\partial i}\Lambda_{b\rightarrow i}^{-}\right)$$
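The final expression can likewise be verified by enumeration: under the factorized cavity approximation, summing directly over $\sigma_i$ and all cavity spins equals $\prod_b\Lambda^+_{b\rightarrow i}+\prod_b\Lambda^-_{b\rightarrow i}$, because each factor $b$ has its own disjoint set of cavity neighbours. A minimal sketch with hypothetical couplings and messages:

```python
import itertools
import math

# Arbitrary test values (not from the text): spin i touches three factors b,
# each with its own couplings J_b and cavity messages m_{j->b}.
beta = 0.9
factors = [
    {"J": 0.6, "m": [0.3, -0.5]},
    {"J": -0.4, "m": [0.7, 0.1]},
    {"J": 1.1, "m": [-0.2, 0.9]},
]

def Lam(f, s_i):
    """Lambda_{b->i}^{s_i} = cosh(beta J_b)(1 + s_i tanh(beta J_b) prod_j m_{j->b})."""
    return math.cosh(beta * f["J"]) * (
        1 + s_i * math.tanh(beta * f["J"]) * math.prod(f["m"]))

# Message-passing form of Z_new / Z_old
ratio_msg = sum(math.prod(Lam(f, s_i) for f in factors) for s_i in (+1, -1))

# Brute force: sum over sigma_i and every cavity spin, weighted by the
# factorized cavity distribution prod q_{j->b}(sigma_j)
ratio_brute = 0.0
per_factor = [list(itertools.product((-1, +1), repeat=len(f["m"]))) for f in factors]
for s_i in (+1, -1):
    for combo in itertools.product(*per_factor):
        weight, energy = 1.0, 0.0
        for f, sig in zip(factors, combo):
            weight *= math.prod((1 + s * mj) / 2 for s, mj in zip(sig, f["m"]))
            energy += f["J"] * s_i * math.prod(sig)
        ratio_brute += weight * math.exp(beta * energy)
```

The agreement here checks only the algebra after the factorization; the factorized cavity distribution itself remains an approximation on loopy graphs.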
  
====References====
Haiping Huang, Statistical Physics of Neural Networks, Springer, 2021
  
  