Using the Cloud Natural Language API for entity, sentiment, and syntax analysis
Here is my documentation after taking the Introduction to AI course on Coursera.
The Cloud Natural Language API lets you extract entities from text, perform sentiment and syntactic analysis, and classify text into categories.
In this lab, you learn how to use the Natural Language API to analyze entities, sentiment, and syntax.
In this lab, you will learn how to:
Create a Natural Language API request and call the API with curl.
Extract entities and run sentiment analysis on text with the Natural Language API.
Perform linguistic (syntax) analysis on text with the Natural Language API.
Create a Natural Language API request in a different language.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Time to complete the lab (remember, once you start, you cannot pause a lab).
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
The Open Google Cloud console button
The time remaining
The temporary credentials (Username and Password) that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
student-03[email protected]
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
pU6jGubGXV6Z
You can also find the Password in the Lab Details panel.
Click Next.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Since you use curl to send a request to the Natural Language API, you must generate an API key to pass in your request URL.
To create an API key, in the Cloud Console, select Navigation menu > APIs & Services > Credentials.
Click Create credentials and select API key.
Copy the generated API key and click Close.
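If you prefer the command line, recent gcloud releases can create an API key as well. This is optional and not part of the lab steps, and the display name below is just an illustrative label:
gcloud services api-keys create --display-name="natural-language-lab-key"
The console route above is all the lab requires, so use whichever is more convenient.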
Click Check my progress to verify the objective.
To perform the next steps, connect via SSH to the instance provisioned for you.
Click Navigation menu > Compute Engine. You should see the provisioned Linux instance, linux-instance, in the VM instances list.
Click the SSH button. You are brought to an interactive shell.
In the command line, enter the following, replacing <YOUR_API_KEY> with the key you just copied:
export API_KEY=<YOUR_API_KEY>
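As a quick sanity check (optional, not a lab step), you can print the variable back to confirm it is set:
echo $API_KEY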
The first Natural Language API method you use is analyzeEntities. With this method, the API can extract entities (like people, places, and events) from text. To try out the API's entity analysis, use the following sentence:
Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series.
You build your request to the Natural Language API in the file request.json.
Create request.json with nano (a code editor):
nano request.json
Add the following to request.json:
{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series."
  },
  "encodingType":"UTF8"
}
Press CTRL+X to exit nano, then Y to save the file, then ENTER to confirm.
In the request, you're telling the Natural Language API about the text being sent. Supported type values are PLAIN_TEXT or HTML. In content, you pass the text to send to the Natural Language API for analysis.
The Natural Language API also supports sending files stored in Cloud Storage for text processing. If you wanted to send a file from Cloud Storage, you would replace content with gcsContentUri and give it a value of the text file's URI in Cloud Storage.
encodingType tells the API which type of text encoding to use when processing the text. The API uses this to calculate where specific entities appear in the text.
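For example, a request that reads the text from a file in Cloud Storage might look like the sketch below; the bucket and object names are placeholders, not resources created in this lab:
{
  "document":{
    "type":"PLAIN_TEXT",
    "gcsContentUri":"gs://your-bucket-name/your-text-file.txt"
  },
  "encodingType":"UTF8"
}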
Click Check my progress to verify the objective.
Now pass your request body, along with the API key environment variable you saved earlier, to the Natural Language API with the following curl command (the backslash lets the command continue on a second line):
curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json > result.json
Then view the response with:
cat result.json
Part of the response, showing the mentions of the "Joanne Rowling" entity, should look like this:
"content": "Joanne Rowling",
"beginOffset": 0
},
"type": "PROPER"
},
{
"text": {
"content": "Rowling",
"beginOffset": 53
},
"type": "PROPER"
},
{
"text": {
"content": "novelist",
"beginOffset": 96
},
"type": "COMMON"
},
{
"text": {
"content": "Robert Galbraith",
"beginOffset": 65
},
"type": "PROPER"
}
]
},
...
]
}
For each entity in the response, you get the entity type, the associated Wikipedia URL if there is one, the salience, and the indices of where this entity appeared in the text. Salience is a number in the [0,1] range that refers to the centrality of the entity to the text as a whole.
The Natural Language API can also recognize the same entity mentioned in different ways. Take a look at the mentions list in the response: the API is able to tell that "Joanne Rowling", "Rowling", "novelist", and "Robert Galbraith" all point to the same thing.
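As an optional check, if the jq utility is available on the instance (it is not part of the lab; on a Debian-based instance, sudo apt-get install -y jq would add it), you can list each entity's name and salience from result.json:
jq '.entities[] | {name, salience}' result.json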
Click Check my progress to verify the objective.
In addition to extracting entities, the Natural Language API also lets you perform sentiment analysis on a block of text. This JSON request will include the same parameters as the request above, but this time change the text to include something with a stronger sentiment.
Open request.json with nano again and replace its contents with the following (feel free to replace the content below with your own text):
nano request.json
{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"Harry Potter is the best book. I think everyone should read it."
  },
  "encodingType": "UTF8"
}
Press CTRL+X to exit nano, then Y to save the file, then ENTER to confirm.
Next you send the request to the API's analyzeSentiment endpoint:
curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
Your response should look like this:
{
"documentSentiment": {
"magnitude": 1.9,
"score": 0.9
},
"language": "en",
"sentences": [
{
"text": {
"content": "Harry Potter is the best book.",
"beginOffset": 0
},
"sentiment": {
"magnitude": 0.9,
"score": 0.9
}
},
{
"text": {
"content": "I think everyone should read it.",
"beginOffset": 31
},
"sentiment": {
"magnitude": 0.9,
"score": 0.9
}
}
]
}
Notice that you get two types of sentiment values: sentiment for the document as a whole, and sentiment broken down by sentence. The sentiment method returns two values:
score is a number from -1.0 to 1.0 indicating how positive or negative the statement is.
magnitude is a number ranging from 0 to infinity that represents the weight of sentiment expressed in the statement, regardless of being positive or negative. Longer blocks of text with heavily weighted statements have higher magnitude values.
In the response above, both sentences have a strongly positive score (0.9), and the document as a whole has a score of 0.9 with a magnitude of 1.9.
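If you want just the per-sentence scores (optional, and again assuming jq is installed), you can re-run the call and filter the response:
curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json \
  | jq '.sentences[] | {text: .text.content, score: .sentiment.score, magnitude: .sentiment.magnitude}'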
In addition to providing sentiment details on the entire text document, the Natural Language API can also break down sentiment by the entities in the text. Use this sentence as an example:
I liked the sushi but the service was terrible.
In this case, getting a sentiment score for the entire sentence as you did above might not be so useful. If this was a restaurant review and there were hundreds of reviews for the same restaurant, you'd want to know exactly which things people liked and didn't like in their reviews. Fortunately, the Natural Language API has a method that lets you get the sentiment for each entity in the text, called analyzeEntitySentiment. Let's see how it works!
Open request.json with nano again and update it with the sentence below:
nano request.json
{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"I liked the sushi but the service was terrible."
  },
  "encodingType": "UTF8"
}
Press CTRL+X to exit nano, then Y to save the file, then ENTER to confirm.
Then call the analyzeEntitySentiment endpoint with the following curl command:
curl "https://language.googleapis.com/v1/documents:analyzeEntitySentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
In the response you get back two entity objects: one for "sushi" and one for "service". Here's the full JSON response:
{
"entities": [
{
"name": "sushi",
"type": "CONSUMER_GOOD",
"metadata": {},
"salience": 0.51064336,
"mentions": [
{
"text": {
"content": "sushi",
"beginOffset": 12
},
"type": "COMMON",
"sentiment": {
"magnitude": 0,
"score": 0
}
}
],
"sentiment": {
"magnitude": 0,
"score": 0
}
},
{
"name": "service",
"type": "OTHER",
"metadata": {},
"salience": 0.48935664,
"mentions": [
{
"text": {
"content": "service",
"beginOffset": 26
},
"type": "COMMON",
"sentiment": {
"magnitude": 0.7,
"score": -0.7
}
}
],
"sentiment": {
"magnitude": 0.7,
"score": -0.7
}
}
],
"language": "en"
}
You can see that the score returned for "sushi" was a neutral score of 0, whereas "service" got a score of -0.7. Cool! You may also notice that there are two sentiment objects returned for each entity. If either of these terms were mentioned more than once, the API would return a different sentiment score and magnitude for each mention, along with an aggregate sentiment for the entity.
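For a compact view of the per-entity scores (optional, assuming jq is installed), pipe the same call through jq:
curl "https://language.googleapis.com/v1/documents:analyzeEntitySentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json \
  | jq '.entities[] | {name, score: .sentiment.score, magnitude: .sentiment.magnitude}'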
Use syntactic analysis, another of the Natural Language API's methods, to dive deeper into the linguistic details of the text. analyzeSyntax extracts linguistic information, breaking up the given text into a series of sentences and tokens (generally, word boundaries), to provide further analysis on those tokens. For each word in the text, the API tells you the word's part of speech (noun, verb, adjective, etc.) and how it relates to other words in the sentence (Is it the root verb? A modifier?).
Try it out with a simple sentence. This JSON request is similar to the ones above; this time you send it to the API's syntax annotation method.
Open request.json with nano again and update it with the following:
nano request.json
{
  "document":{
    "type":"PLAIN_TEXT",
    "content": "Joanne Rowling is a British novelist, screenwriter and film producer."
  },
  "encodingType": "UTF8"
}
Press CTRL+X to exit nano, then Y to save the file, then ENTER to confirm.
Then call the API's analyzeSyntax method:
curl "https://language.googleapis.com/v1/documents:analyzeSyntax?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
The response should return an object like the one below for each token in the sentence:
{
"text": {
"content": "is",
"beginOffset": 15
},
"partOfSpeech": {
"tag": "VERB",
"aspect": "ASPECT_UNKNOWN",
"case": "CASE_UNKNOWN",
"form": "FORM_UNKNOWN",
"gender": "GENDER_UNKNOWN",
"mood": "INDICATIVE",
"number": "SINGULAR",
"person": "THIRD",
"proper": "PROPER_UNKNOWN",
"reciprocity": "RECIPROCITY_UNKNOWN",
"tense": "PRESENT",
"voice": "VOICE_UNKNOWN"
},
"dependencyEdge": {
"headTokenIndex": 2,
"label": "ROOT"
},
"lemma": "be"
},
Let's break down the response:
partOfSpeech gives the grammatical role of each token; for example, it tells you that "Joanne" is a noun (and, in the object above, that "is" is a verb).
dependencyEdge includes data that you can use to create a dependency parse tree of the text. Essentially, this is a diagram showing how words in a sentence relate to each other. A dependency parse tree for the sentence above would look like this:
[dependency parse tree diagram]
headTokenIndex is the index of the token that has an arc pointing at "Joanne". Think of each token in the sentence as a word in an array; a headTokenIndex of 1 for "Joanne" refers to the word "Rowling", which it is connected to in the tree.
The label NN (short for noun compound modifier) describes the word's role in the sentence. "Joanne" modifies "Rowling", the subject of the sentence.
lemma is the canonical form of the word. For example, the words run, runs, ran, and running all have a lemma of run. The lemma value is useful for tracking occurrences of a word in a large piece of text over time.
The Natural Language API also supports languages other than English (the full list can be found in the Language Support Guide).
Open request.json with nano again and update it with a sentence in Japanese (it says that Google's Japan office is located in Roppongi Hills, Tokyo):
nano request.json
{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
  }
}
Press CTRL+X to exit nano, then Y to save the file, then ENTER to confirm.
Notice that you don't tell the API which language the text is in; it can detect the language automatically.
Then send the request to the analyzeEntities endpoint:
curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
And you get the following response:
{
"entities": [
{
"name": "日本",
"type": "LOCATION",
"metadata": {
"mid": "/m/03_3d",
"wikipedia_url": "https://en.wikipedia.org/wiki/Japan"
},
"salience": 0.23854347,
"mentions": [
{
"text": {
"content": "日本",
"beginOffset": 0
},
"type": "PROPER"
}
]
},
{
"name": "グーグル",
"type": "ORGANIZATION",
"metadata": {
"mid": "/m/045c7b",
"wikipedia_url": "https://en.wikipedia.org/wiki/Google"
},
"salience": 0.21155767,
"mentions": [
{
"text": {
"content": "グーグル",
"beginOffset": 9
},
"type": "PROPER"
}
]
},
...
]
"language": "ja"
}
Even with Japanese input, the API maps each entity to its Wikipedia page - so cool!
You've learned how to perform text analysis with the Cloud Natural Language API by extracting entities, analyzing sentiment, and doing syntax annotation. In this lab, you created a Natural Language API request and called the API with curl, extracted entities and ran sentiment analysis on text with the Natural Language API, performed linguistic analysis on text, and created a Natural Language API request in a different language.