<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://bradleymonk.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Python_Pubmed</id>
	<title>Python Pubmed - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://bradleymonk.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Python_Pubmed"/>
	<link rel="alternate" type="text/html" href="https://bradleymonk.com/wiki/index.php?title=Python_Pubmed&amp;action=history"/>
	<updated>2026-04-09T17:41:00Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://bradleymonk.com/wiki/index.php?title=Python_Pubmed&amp;diff=1876&amp;oldid=prev</id>
		<title>Bradley Monk: Created page with &quot; ==Web Scrape Pubmed Using Python Script==  &lt;pre&gt; #!/usr/bin/env python ################## #  PYTHON SCRIPT #  PERFORM WEBSITE SCRAPE OF PUBMED #  PULL RELEVANT ARTICLE INFO F...&quot;</title>
		<link rel="alternate" type="text/html" href="https://bradleymonk.com/wiki/index.php?title=Python_Pubmed&amp;diff=1876&amp;oldid=prev"/>
		<updated>2013-08-15T23:29:32Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot; ==Web Scrape Pubmed Using Python Script==  &amp;lt;pre&amp;gt; #!/usr/bin/env python ################## #  PYTHON SCRIPT #  PERFORM WEBSITE SCRAPE OF PUBMED #  PULL RELEVANT ARTICLE INFO F...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
==Web Scrape Pubmed Using Python Script==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
##################&lt;br /&gt;
#  PYTHON SCRIPT&lt;br /&gt;
#  PERFORM WEBSITE SCRAPE OF PUBMED&lt;br /&gt;
#  PULL RELEVANT ARTICLE INFO FROM WEBPAGE&lt;br /&gt;
#  FORMAT CONTENT FOR WIKI TEMPLATE&lt;br /&gt;
#  REQUIRES &amp;#039;BeautifulSoup&amp;#039;&lt;br /&gt;
#  AUTHOR: BRADLEY MONK&lt;br /&gt;
#  LICENSE: GNU&lt;br /&gt;
#################&lt;br /&gt;
&lt;br /&gt;
import re&lt;br /&gt;
# re.compile(&amp;#039;&amp;lt;title&amp;gt;(.*)&amp;lt;/title&amp;gt;&amp;#039;)&lt;br /&gt;
import urllib2&lt;br /&gt;
from bs4 import BeautifulSoup&lt;br /&gt;
soup = BeautifulSoup(urllib2.urlopen(&amp;#039;http://www.ncbi.nlm.nih.gov/pubmed/10731148&amp;#039;).read())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;#################------------------#################&amp;quot;)&lt;br /&gt;
#------- pubmed authors ---------#&lt;br /&gt;
print(&amp;quot;{{Article|&amp;quot;)&lt;br /&gt;
div_tag = soup.find_all(&amp;#039;div&amp;#039;, attrs={&amp;quot;class&amp;quot;: &amp;quot;auths&amp;quot;})&lt;br /&gt;
&lt;br /&gt;
# collect every author name from the links inside the auths div&lt;br /&gt;
auts = []&lt;br /&gt;
for div in div_tag:&lt;br /&gt;
	for link in div.find_all(&amp;#039;a&amp;#039;):&lt;br /&gt;
		auts.append(link.get_text())&lt;br /&gt;
&lt;br /&gt;
#------- pubmed authors ---------#&lt;br /&gt;
print(&amp;#039; &amp;#039;.join(auts))&lt;br /&gt;
&lt;br /&gt;
#------- pubmed year ------------#&lt;br /&gt;
print(&amp;quot;|&amp;quot;)&lt;br /&gt;
jouryear = soup.find_all(attrs={&amp;quot;class&amp;quot;: &amp;quot;cit&amp;quot;})&lt;br /&gt;
cit = jouryear[0].get_text()&lt;br /&gt;
# the four-digit year begins two characters after the first period in the citation&lt;br /&gt;
yearstart = cit.find(&amp;quot;.&amp;quot;) + 2&lt;br /&gt;
print(cit[yearstart:yearstart+4])&lt;br /&gt;
#------- pubmed year ------------#&lt;br /&gt;
&lt;br /&gt;
#------- pubmed journal ---------#&lt;br /&gt;
journal = soup.find_all(attrs={&amp;quot;class&amp;quot;: &amp;quot;cit&amp;quot;})&lt;br /&gt;
print(&amp;quot;|&amp;quot;)&lt;br /&gt;
print(journal[0].a.string)&lt;br /&gt;
#------- pubmed journal ---------#&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;- [http://domain.com/linktofile.pdf PDF]&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
#--------- pubmed PMID -----------#&lt;br /&gt;
PMID = soup.find_all(attrs={&amp;quot;class&amp;quot;: &amp;quot;rprtid&amp;quot;})&lt;br /&gt;
print(&amp;quot;|&amp;quot;)&lt;br /&gt;
print(PMID[0].dd.string)&lt;br /&gt;
#--------- pubmed PMID -----------#&lt;br /&gt;
&lt;br /&gt;
#------- pubmed title ---------#&lt;br /&gt;
title = soup.find_all(attrs={&amp;quot;class&amp;quot;: &amp;quot;rprt abstract&amp;quot;})&lt;br /&gt;
print(&amp;quot;|&amp;quot;)&lt;br /&gt;
print(title[0].h1.string)&lt;br /&gt;
#------- pubmed title ---------#&lt;br /&gt;
print(&amp;quot;}}&amp;quot;)&lt;br /&gt;
print(&amp;quot;{{ExpandBox|Expand to view experiment summary|&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
#------- pubmed abstract ---------#&lt;br /&gt;
abstract = soup.find_all(attrs={&amp;quot;class&amp;quot;: &amp;quot;abstr&amp;quot;})&lt;br /&gt;
# get_text() is safer than .string, which returns None when the paragraph has nested markup&lt;br /&gt;
print(abstract[0].p.get_text())&lt;br /&gt;
#------- pubmed abstract ---------#&lt;br /&gt;
print(&amp;quot;}}&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
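&lt;br /&gt;
Note: scraping PubMed&amp;#039;s HTML is fragile, since any page redesign breaks the class-based lookups above. NCBI also serves the same record as structured XML through its documented E-utilities efetch endpoint; the sketch below is an alternative approach (not part of the original script) using that service:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
# fetch the same PubMed record as structured XML via NCBI E-utilities (efetch)&lt;br /&gt;
import urllib2&lt;br /&gt;
from bs4 import BeautifulSoup&lt;br /&gt;
url = (&amp;#039;https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi&amp;#039;&lt;br /&gt;
       &amp;#039;?db=pubmed&amp;amp;id=10731148&amp;amp;retmode=xml&amp;#039;)&lt;br /&gt;
soup = BeautifulSoup(urllib2.urlopen(url).read())&lt;br /&gt;
# ArticleTitle and AbstractText are standard elements of PubMed efetch XML;&lt;br /&gt;
# the default parser lowercases tag names, hence the lowercase lookups&lt;br /&gt;
print(soup.find(&amp;#039;articletitle&amp;#039;).get_text())&lt;br /&gt;
print(soup.find(&amp;#039;abstracttext&amp;#039;).get_text())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;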
&lt;br /&gt;
&lt;br /&gt;
==Result==&lt;br /&gt;
&lt;br /&gt;
{{Article|Hayashi Y Shi SH Esteban JA Piccini A Poncer JC Malinow R|2000|Science [http://domain.com/linktofile.pdf PDF]|10731148|Driving AMPA receptors into synapses by LTP and CaMKII: requirement for GluR1 and PDZ domain interaction}}&lt;br /&gt;
{{ExpandBox|Expand to view experiment summary|&lt;br /&gt;
To elucidate mechanisms that control and execute activity-dependent synaptic plasticity, alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionate receptors (AMPA-Rs) with an electrophysiological tag were expressed in rat hippocampal neurons. Long-term potentiation (LTP) or increased activity of the calcium/calmodulin-dependent protein kinase II (CaMKII) induced delivery of tagged AMPA-Rs into synapses. This effect was not diminished by mutating the CaMKII phosphorylation site on the GluR1 AMPA-R subunit, but was blocked by mutating a predicted PDZ domain interaction site. These results show that LTP and CaMKII activity drive AMPA-Rs to synapses by a mechanism that requires the association between GluR1 and a PDZ domain protein.&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Bradley Monk</name></author>
	</entry>
</feed>