{"id":15710,"date":"2020-09-05T11:23:33","date_gmt":"2020-09-05T15:23:33","guid":{"rendered":"https:\/\/www.techwalls.com\/?p=15710"},"modified":"2020-09-05T11:27:52","modified_gmt":"2020-09-05T15:27:52","slug":"tackle-overfitting-via-regularization-machine-learning-models","status":"publish","type":"post","link":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/","title":{"rendered":"How to tackle overfitting via regularization in machine learning models"},"content":{"rendered":"\n<p>Overfitting is a common problem in machine learning, where a model performs well on training data but does not generalize well to unseen data (test data). If a model suffers from overfitting, we also say that the model has a high variance, which can be caused by having too many parameters, leading to a model that is too complex given the underlying data. Similarly, our model can also suffer from <strong>underfitting <\/strong>(high bias), which means that our model is not complex enough to capture the pattern in the training data well and therefore also suffers from low performance on unseen data.<\/p>\n\n\n\n<!--more-->\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"512\" src=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg\" alt=\"\" class=\"wp-image-15712\" srcset=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg 1024w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4-300x150.jpeg 300w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4-768x384.jpeg 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n\n\n\n<p><em>This article is an excerpt from the book <\/em><a 
href=\"https:\/\/www.amazon.com\/Python-Machine-Learning-scikit-learn-TensorFlow\/dp\/1789955750\" rel=\"sponsored nofollow\"><em>Python Machine Learning, Third Edition<\/em><\/a><em> by Sebastian Raschka and Vahid Mirjalili. This book is updated for TensorFlow 2 and the latest additions to scikit-learn. This new third edition of the book is now available at 20% off (offer valid till 8th September 2020).<\/em><\/p>\n\n\n\n<p>The problems of overfitting and underfitting can be best illustrated by comparing a linear decision boundary to more complex, nonlinear decision boundaries, as shown in the following figure:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07.jpeg\"><img loading=\"lazy\" decoding=\"async\" width=\"1300\" height=\"469\" src=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07.jpeg\" alt=\"\" class=\"wp-image-15713\" srcset=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07.jpeg 1300w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07-300x108.jpeg 300w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07-1024x369.jpeg 1024w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_07-768x277.jpeg 768w\" sizes=\"auto, (max-width: 1300px) 100vw, 1300px\" \/><\/a><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The bias-variance tradeoff<\/strong><\/h2>\n\n\n\n<p>Often, researchers use the terms &#8220;bias&#8221; and &#8220;variance&#8221; or &#8220;bias-variance tradeoff&#8221; to describe the performance of a model\u2014that is, you may stumble upon talks, books, or articles where people say that a model has a &#8220;high variance&#8221; or &#8220;high bias.&#8221; So, what does that mean? 
In general, we might say that &#8220;high variance&#8221; is proportional to overfitting and &#8220;high bias&#8221; is proportional to underfitting.<\/p>\n\n\n\n<p>In the context of machine learning models, variance measures the consistency (or variability) of the model prediction for classifying a particular example if we retrain the model multiple times, for example, on different subsets of the training dataset. We can say that the model is sensitive to the randomness in the training data. In contrast, bias measures how far off the predictions are from the correct values in general if we rebuild the model multiple times on different training datasets; bias is the measure of the systematic error that is not due to randomness.<\/p>\n\n\n\n<p>One way of finding a good bias-variance tradeoff is to tune the complexity of the model via regularization. Regularization is a very useful method for handling collinearity (high correlation among features), filtering out noise from data, and eventually preventing overfitting.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Tackling overfitting via regularization<\/strong><strong><\/strong><\/h2>\n\n\n\n<p>The concept behind regularization is to introduce additional information (bias) to penalize extreme parameter (weight) values. 
The most common form of regularization is so-called <strong>L2 regularization <\/strong>(sometimes also called L2 shrinkage or weight decay), which can be written as follows:<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-1.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-1.jpg\" alt=\"\" class=\"wp-image-15715\" width=\"244\" height=\"109\" srcset=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-1.jpg 564w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-1-300x134.jpg 300w\" sizes=\"auto, (max-width: 244px) 100vw, 244px\" \/><\/a><\/figure><\/div>\n\n\n\n<p>Here, \ud835\udf06 is the so-called <strong>regularization parameter<\/strong>.<\/p>\n\n\n\n<p>Note: Regularization is another reason why feature scaling such as standardization is important. For regularization to work properly, we need to ensure that all our features are on comparable scales.<\/p>\n\n\n\n<p>The cost function for logistic regression can be regularized by adding a simple regularization term, which will shrink the weights during model training:<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2.jpg\" alt=\"\" class=\"wp-image-15716\" width=\"831\" height=\"121\" srcset=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2.jpg 1332w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2-300x44.jpg 300w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2-1024x149.jpg 1024w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/pic-2-768x112.jpg 768w\" sizes=\"auto, (max-width: 831px) 100vw, 831px\" 
\/><\/a><\/figure><\/div>\n\n\n\n<p>Via the regularization parameter, \ud835\udf06, we can then control how well we fit the training data while keeping the weights small. By increasing the value of \ud835\udf06, we increase the regularization strength.<\/p>\n\n\n\n<p>The parameter, C, that is implemented for the LogisticRegression class in scikit-learn comes from a convention in support vector machines. The term C is directly related to the regularization parameter, \ud835\udf06, which is its inverse. Consequently, decreasing the value of the inverse regularization parameter, C, means that we are increasing the regularization strength, which we can visualize by plotting the L2 regularization path for the two weight coefficients:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&gt;&gt;&gt; weights, params = [], []\n&gt;&gt;&gt; for c in np.arange(-5, 5):\n...     lr = LogisticRegression(C=10.**c, random_state=1,\n...                             solver='lbfgs', multi_class='ovr')\n...     lr.fit(X_train_std, y_train)\n...     weights.append(lr.coef_[1])\n...     params.append(10.**c)\n&gt;&gt;&gt; weights = np.array(weights)\n&gt;&gt;&gt; plt.plot(params, weights[:, 0],\n...          label='petal length')\n&gt;&gt;&gt; plt.plot(params, weights[:, 1], linestyle='--',\n...          label='petal width')\n&gt;&gt;&gt; plt.ylabel('weight coefficient')\n&gt;&gt;&gt; plt.xlabel('C')\n&gt;&gt;&gt; plt.legend(loc='upper left')\n&gt;&gt;&gt; plt.xscale('log')\n&gt;&gt;&gt; plt.show()<\/code><\/pre>\n\n\n\n<p>By executing the preceding code, we fitted 10 logistic regression models with different values for the inverse-regularization parameter, C. For the purposes of illustration, we only collected the weight coefficients of class 1 (here, the second class in the dataset: Iris-versicolor) versus all classifiers\u2014remember that we are using the OvR technique for multiclass classification.<\/p>\n\n\n\n<p>As we can see in the resulting plot, the weight coefficients shrink if we decrease parameter C, that is, if we increase the regularization strength:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08.jpeg\"><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"811\" src=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08.jpeg\" alt=\"\" class=\"wp-image-15714\" srcset=\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08.jpeg 1200w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08-300x203.jpeg 300w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08-1024x692.jpeg 1024w, https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/B13208_03_08-768x519.jpeg 768w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><\/a><\/figure>\n\n\n\n<p>In this article, you learned how regularization can be used to tackle the problem of overfitting. 
<em>Python Machine Learning, Third Edition<\/em> is a comprehensive guide to machine learning and deep learning with Python, scikit-learn, and TensorFlow 2, with coverage of GANs and reinforcement learning.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">About the Authors<\/h3>\n\n\n\n<p><strong>Sebastian Raschka<\/strong> is an Assistant Professor of Statistics at the University of Wisconsin-Madison focusing on machine learning and deep learning research. Some of his recent research methods have been applied to solving problems in the field of biometrics for imparting privacy to face images. Other research focus areas include the development of methods related to model evaluation in machine learning, deep learning for ordinal targets, and applications of machine learning to computational biology. <strong>Vahid Mirjalili<\/strong> obtained his Ph.D. in mechanical engineering working on novel methods for large-scale, computational simulations of molecular structures. Currently, he is focusing his research efforts on applications of machine learning in various computer vision projects at the Department of Computer Science and Engineering at Michigan State University. He recently joined 3M Company as a research scientist, where he uses his expertise and applies state-of-the-art machine learning and deep learning techniques to solve real-world problems in various applications to make life better.<\/p>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Overfitting is a common problem in machine learning, where a model performs well on training data but does not generalize well to unseen data (test data). 
If a model suffers from overfitting, we also say that the model has a high variance, which can be caused by having too many parameters, leading to a model [&hellip;]<\/p>\n","protected":false},"author":89,"featured_media":15712,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[9],"tags":[52],"class_list":{"0":"post-15710","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-news","8":"tag-news-2","9":"entry"},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v23.9 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to tackle overfitting via regularization in machine learning models - TechWalls<\/title>\n<meta name=\"description\" content=\"This article is an excerpt from the book Python Machine Learning, Third Edition by Sebastian Raschka and Vahid Mirjalili. This book is updated for TensorFlow 2 and the latest additions to scikit-learn. This new third edition of the book is now available at 20% off now (offer is valid till 8th September 2020).\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Guest Authors\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/\",\"url\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/\",\"name\":\"How to tackle overfitting via regularization in machine learning models - TechWalls\",\"isPartOf\":{\"@id\":\"https:\/\/www.techwalls.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg\",\"datePublished\":\"2020-09-05T15:23:33+00:00\",\"dateModified\":\"2020-09-05T15:27:52+00:00\",\"author\":{\"@id\":\"https:\/\/www.techwalls.com\/#\/schema\/person\/440f216965cffca997e53e754f489c84\"},\"description\":\"This article is an excerpt from the book Python Machine Learning, Third Edition by Sebastian Raschka and Vahid Mirjalili. This book is updated for TensorFlow 2 and the latest additions to scikit-learn. 
This new third edition of the book is now available at 20% off now (offer is valid till 8th September 2020).\",\"breadcrumb\":{\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage\",\"url\":\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg\",\"contentUrl\":\"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg\",\"width\":1024,\"height\":512},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.techwalls.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"News\",\"item\":\"https:\/\/www.techwalls.com\/news\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"How to tackle overfitting via regularization in machine learning models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.techwalls.com\/#website\",\"url\":\"https:\/\/www.techwalls.com\/\",\"name\":\"TechWalls\",\"description\":\"Technology News | Gadget Reviews | Tutorials\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.techwalls.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.techwalls.com\/#\/schema\/person\/440f216965cffca997e53e754f489c84\",\"name\":\"Guest 
Authors\",\"url\":\"https:\/\/www.techwalls.com\/author\/guestauthor\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How to tackle overfitting via regularization in machine learning models - TechWalls","description":"This article is an excerpt from the book Python Machine Learning, Third Edition by Sebastian Raschka and Vahid Mirjalili. This book is updated for TensorFlow 2 and the latest additions to scikit-learn. This new third edition of the book is now available at 20% off now (offer is valid till 8th September 2020).","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/","twitter_misc":{"Written by":"Guest Authors","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/","url":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/","name":"How to tackle overfitting via regularization in machine learning models - TechWalls","isPartOf":{"@id":"https:\/\/www.techwalls.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage"},"image":{"@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage"},"thumbnailUrl":"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg","datePublished":"2020-09-05T15:23:33+00:00","dateModified":"2020-09-05T15:27:52+00:00","author":{"@id":"https:\/\/www.techwalls.com\/#\/schema\/person\/440f216965cffca997e53e754f489c84"},"description":"This article is an excerpt from the book Python Machine Learning, Third Edition by Sebastian Raschka and 
Vahid Mirjalili. This book is updated for TensorFlow 2 and the latest additions to scikit-learn. This new third edition of the book is now available at 20% off now (offer is valid till 8th September 2020).","breadcrumb":{"@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#primaryimage","url":"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg","contentUrl":"https:\/\/www.techwalls.com\/wp-content\/uploads\/2020\/09\/PML3-InfoG-4.jpeg","width":1024,"height":512},{"@type":"BreadcrumbList","@id":"https:\/\/www.techwalls.com\/tackle-overfitting-via-regularization-machine-learning-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.techwalls.com\/"},{"@type":"ListItem","position":2,"name":"News","item":"https:\/\/www.techwalls.com\/news\/"},{"@type":"ListItem","position":3,"name":"How to tackle overfitting via regularization in machine learning models"}]},{"@type":"WebSite","@id":"https:\/\/www.techwalls.com\/#website","url":"https:\/\/www.techwalls.com\/","name":"TechWalls","description":"Technology News | Gadget Reviews | Tutorials","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.techwalls.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.techwalls.com\/#\/schema\/person\/440f216965cffca997e53e754f489c84","name":"Guest 
Authors","url":"https:\/\/www.techwalls.com\/author\/guestauthor\/"}]}},"_links":{"self":[{"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/posts\/15710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/users\/89"}],"replies":[{"embeddable":true,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/comments?post=15710"}],"version-history":[{"count":0,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/posts\/15710\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/media\/15712"}],"wp:attachment":[{"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/media?parent=15710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/categories?post=15710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.techwalls.com\/wp-json\/wp\/v2\/tags?post=15710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}