{"id":4734,"date":"2025-03-24T16:01:16","date_gmt":"2025-03-24T16:01:16","guid":{"rendered":"https:\/\/ccitonline.com\/wp\/?p=4734"},"modified":"2025-03-24T16:01:16","modified_gmt":"2025-03-24T16:01:16","slug":"basic-principle-of-pinn-and-simulation-of-1d-heat-conduction-using-pinn-approach","status":"publish","type":"post","link":"https:\/\/ccitonline.com\/wp\/2025\/03\/24\/basic-principle-of-pinn-and-simulation-of-1d-heat-conduction-using-pinn-approach\/","title":{"rendered":"Basic Principle of PINN and Simulation of 1D Heat Conduction Using the PINN Approach"},"content":{"rendered":"\n<p>Zahran Nadhif Afdallah Malik-2306155451<\/p>\n\n\n\n<p>Physics-Informed Neural Networks (PINNs) are a type of neural network that incorporates physical laws or constraints into its architecture during training. This approach allows PINNs to solve complex problems involving real-world systems more effectively by leveraging known physical principles.<\/p>\n\n\n\n<p>Here\u2019s an overview of PINNs:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Neural Networks and Deep Learning<\/strong>: PINNs build upon traditional neural networks, which are designed for pattern recognition and data processing. However, PINNs extend these models by integrating physical constraints or equations.<\/li>\n\n\n\n<li><strong>Partial Differential Equations (PDEs)<\/strong>: PINNs incorporate PDEs, which describe how physical systems change over space and time. These equations are often derived from fundamental laws such as conservation of mass, energy, or momentum.<\/li>\n\n\n\n<li><strong>Loss Functions<\/strong>: PINNs use loss functions that include terms enforcing the physical constraints. 
This ensures that the neural network\u2019s predictions satisfy these physical laws, improving accuracy and physical consistency.<\/li>\n\n\n\n<li><strong>Training Process<\/strong>: During training, PINNs are optimized using gradient descent, which minimizes a combination of prediction error (training loss) and constraint satisfaction (regularization loss).<\/li>\n\n\n\n<li><strong>Applications<\/strong>: PINNs are particularly useful for solving forward and inverse problems in domains such as fluid dynamics, materials science, climate modeling, and more.<\/li>\n<\/ol>\n\n\n\n<p>In summary, PINNs combine the power of neural networks with physical constraints, making them effective for real-world applications where understanding the underlying physics is crucial.<\/p>\n\n\n\n<p>Next, I will simulate 1D heat conduction using the PINN approach. Here is the case:<\/p>\n\n\n\n<p>Heat is applied to a 1 m stainless steel block. The temperature at one end is T0 = 100 \u00b0C and at the other end is 0 \u00b0C. Simulate the heat conduction using the PINN approach.<\/p>\n\n\n\n<p><strong>DAI5 Framework<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Awareness: Physics-Informed Neural Networks (PINNs) are a type of artificial neural network that integrates physical laws and constraints into the learning process to enhance performance in real-world applications. Unlike traditional neural networks, which learn patterns from data alone, PINNs incorporate known equations or principles from physics or other domains. This integration ensures that the network\u2019s predictions not only fit the training data but also adhere to the underlying physical laws, making them more accurate and reliable for scientific and engineering tasks. During training, PINNs use a loss function that includes terms enforcing physical constraints. 
For example, if modeling fluid flow, the network would ensure that conservation laws such as mass or momentum are satisfied, improving its ability to generalize and predict behavior accurately. This approach is particularly valuable in fields such as climate modeling, materials science, and engineering design, where understanding and respecting physical principles is crucial. <\/li>\n\n\n\n<li>Intention: Simulating 1D heat conduction using the PINN approach<\/li>\n\n\n\n<li>Initial thinking<\/li>\n<\/ol>\n\n\n\n<p>The 1D heat conduction equation:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"163\" height=\"68\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-833.png\" alt=\"\" class=\"wp-image-4745\"\/><\/figure>\n\n\n\n<p>where t is time, u is the temperature distribution, and <em>a<\/em> is the thermal diffusivity. In the steady-state case considered here, the time derivative vanishes and the equation reduces to d\u00b2u\/dx\u00b2 = 0.<\/p>\n\n\n\n<p>4. Idealization<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Physics loss<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"317\" height=\"100\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-834.png\" alt=\"\" class=\"wp-image-4747\" srcset=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-834.png 317w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-834-300x95.png 300w\" sizes=\"auto, (max-width: 317px) 100vw, 317px\" \/><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data loss<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"449\" height=\"104\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-835.png\" alt=\"\" class=\"wp-image-4752\" srcset=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-835.png 449w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-835-300x69.png 300w\" sizes=\"auto, (max-width: 
449px) 100vw, 449px\" \/><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Total loss<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"364\" height=\"67\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-836.png\" alt=\"\" class=\"wp-image-4753\" srcset=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-836.png 364w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-836-300x55.png 300w\" sizes=\"auto, (max-width: 364px) 100vw, 364px\" \/><\/figure>\n\n\n\n<p>And here are some idealizations of the stainless steel block:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>the length is exactly 1 m, with no error<\/li>\n\n\n\n<li>the thermal conductivity is exactly 16.2 W\/m\u00b7K<\/li>\n\n\n\n<li>the temperature is exactly T0 = 100 \u00b0C at one end and T1 = 0 \u00b0C at the other end<\/li>\n\n\n\n<li>the stainless steel block is in a steady-state condition<\/li>\n<\/ul>\n\n\n\n<p>5. Instruction set<\/p>\n\n\n\n<p>Next, I simulated the heat conduction in Python. Here is the code:<\/p>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<pre class=\"wp-block-code\"><code># Import Libraries\n\nimport torch\n\nimport torch.nn as nn\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\n# Define PINN Class\n\nclass PINN(nn.Module):\n\n\u00a0 \u00a0 def __init__(self):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super(PINN, self).__init__()\n\n\u00a0 \u00a0 \u00a0 \u00a0 self.net = nn.Sequential(\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 nn.Linear(1, 25), \u00a0# Changed the number of neurons in the first layer\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 nn.Tanh(),\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 nn.Linear(25, 25), # Changed the number of neurons in the hidden layer\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 
nn.Tanh(),\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 nn.Linear(25, 1)\n\n\u00a0 \u00a0 \u00a0 \u00a0 )\n\n\u00a0 \u00a0 def forward(self, x):\n\n\u00a0 \u00a0 \u00a0 \u00a0 return self.net(x)\n\n# Compute Loss Function\n\ndef compute_loss(model, x, T0, T1):\n\n\u00a0 \u00a0 x = x.requires_grad_(True)\n\n\u00a0 \u00a0 T = model(x)\n\n\u00a0 \u00a0 # Compute derivatives\n\n\u00a0 \u00a0 dT_dx = torch.autograd.grad(T, x, grad_outputs=torch.ones_like(T), create_graph=True)&#91;0]\n\n\u00a0 \u00a0 d2T_dx2 = torch.autograd.grad(dT_dx, x, grad_outputs=torch.ones_like(dT_dx), create_graph=True)&#91;0]\n\n\u00a0 \u00a0 # Physics loss: residual of the steady-state equation d2T\/dx2 = 0\n\n\u00a0 \u00a0 physics_loss = torch.mean(d2T_dx2**2)\n\n\u00a0 \u00a0 # Boundary conditions\n\n\u00a0 \u00a0 T_left = model(torch.tensor(&#91;&#91;0.0]]))\n\n\u00a0 \u00a0 T_right = model(torch.tensor(&#91;&#91;1.0]]))\n\n\u00a0 \u00a0 bc_loss = (T_left - T0)**2 + (T_right - T1)**2\n\n\u00a0 \u00a0 return physics_loss + bc_loss\n\n# Train PINN Function\n\ndef train_pinn(T0, T1, epochs=1000):\n\n\u00a0 \u00a0 model = PINN()\n\n\u00a0 \u00a0 optimizer = torch.optim.Adam(model.parameters(), lr=0.0099) \u00a0# Changed the learning rate\n\n\u00a0 \u00a0 x = torch.linspace(0, 1, 120).reshape(-1, 1) \u00a0# Increased the number of discretization points\n\n\u00a0 \u00a0 for epoch in range(epochs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 optimizer.zero_grad()\n\n\u00a0 \u00a0 \u00a0 \u00a0 loss = compute_loss(model, x, T0, T1)\n\n\u00a0 \u00a0 \u00a0 \u00a0 loss.backward()\n\n\u00a0 \u00a0 \u00a0 \u00a0 optimizer.step()\n\n\u00a0 \u00a0 \u00a0 \u00a0 if epoch % 100 == 0:\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 print(f\"Epoch {epoch}, Loss: {loss.item():.6f}\")\n\n\u00a0 \u00a0 return model\n\n# Plot Result Function\n\ndef plot_results(model, T0, T1):\n\n\u00a0 \u00a0 x = torch.linspace(0, 1, 120).reshape(-1, 1)\n\n\u00a0 \u00a0 with torch.no_grad():\n\n\u00a0 \u00a0 \u00a0 \u00a0 T_pred = model(x).numpy()\n\n\u00a0 \u00a0 \u00a0 \u00a0 x = x.numpy()\n\n\u00a0 \u00a0 
\u00a0 \u00a0 T_analytical = T0 + (T1 - T0) * x\n\n\u00a0 \u00a0 plt.figure(figsize=(8, 6))\n\n\u00a0 \u00a0 plt.plot(x, T_pred, label=\"PINN Solution\", linewidth=2)\n\n\u00a0 \u00a0 plt.plot(x, T_analytical, \"--\", label=\"Analytical Solution\", linewidth=2)\n\n\u00a0 \u00a0 plt.xlabel(\"x\")\n\n\u00a0 \u00a0 plt.ylabel(\"Temperature\")\n\n\u00a0 \u00a0 plt.title(\"1D Steady-State Heat Conduction\")\n\n\u00a0 \u00a0 plt.legend()\n\n\u00a0 \u00a0 plt.grid(True)\n\n\u00a0 \u00a0 # Save the plot to a file in case no display is available\n\n\u00a0 \u00a0 plt.savefig(\"heat_conduction_result_updated.png\")\n\n\u00a0 \u00a0 print(\"Plot saved as 'heat_conduction_result_updated.png'\")\n\n# Main Execution\n\nif __name__ == \"__main__\":\n\n\u00a0 \u00a0 # Default parameters\n\n\u00a0 \u00a0 T0 = 100.0 \u00a0# Temperature at x=0\n\n\u00a0 \u00a0 T1 = 0.0 \u00a0 \u00a0# Temperature at x=1\n\n\u00a0 \u00a0 epochs = 1000\n\n\u00a0 \u00a0 # Train the model and display the results\n\n\u00a0 \u00a0 model = train_pinn(T0, T1, epochs)\n\n\u00a0 \u00a0 plot_results(model, T0, T1)<\/code><\/pre>\n<\/div>\n<\/div>\n\n\n\n<p>And here are the results:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"317\" height=\"221\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-851.png\" alt=\"\" class=\"wp-image-4874\" srcset=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-851.png 317w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-851-300x209.png 300w\" sizes=\"auto, (max-width: 317px) 100vw, 317px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"873\" height=\"678\" src=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-852.png\" alt=\"\" class=\"wp-image-4879\" srcset=\"https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-852.png 873w, 
https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-852-300x233.png 300w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-852-768x596.png 768w, https:\/\/ccitonline.com\/wp\/wp-content\/uploads\/2025\/03\/image-852-600x466.png 600w\" sizes=\"auto, (max-width: 873px) 100vw, 873px\" \/><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Zahran Nadhif Afdallah Malik-2306155451 Physics-Informed Neural Networks (PINNs) are a type of neural network that incorporates physical laws or constraints into its architecture during training. This approach allows PINNs to solve complex problems involving real-world systems more effectively by leveraging known physical principles. Here\u2019s an overview of PINNs: In summary, PINNs combine the power of [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-4734","post","type-post","status-publish","format-standard","hentry","category-general"],"_links":{"self":[{"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/posts\/4734","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/comments?post=4734"}],"version-history":[{"count":1,"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/posts\/4734\/revisions"}],"predecessor-version":[{"id":4881,"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/posts\/4734\/revisions\/4881"}],"wp:attachment":[{"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/media?parent=4734"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cciton
line.com\/wp\/wp-json\/wp\/v2\/categories?post=4734"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ccitonline.com\/wp\/wp-json\/wp\/v2\/tags?post=4734"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}