The post Large Action Models (LAMs) power the new wave of autonomous AI first appeared on Raju Alluri.

The post Large Action Models (LAMs) power the new wave of autonomous AI appeared first on Secvice.

LLMs are primarily used to generate text based on user input. In contrast, LAMs take prompt responses to the next level by:

- Understanding the intended actions
- Orchestrating well-defined action sequences
- Accomplishing desired goals through these sequences

LLM-driven models excel in formal linguistic capabilities, generating coherent and contextually relevant text. In contrast, LAM-driven models require functional linguistic capabilities to produce actionable outputs. While LLMs are typically seen as single-step reasoning entities, LAMs rely on multi-step reasoning, enabling them to handle complex, interrelated tasks to achieve a goal.

Open-source LLMs often encounter challenges related to dataset quality, data standards (formats and environments), data diversity (incompleteness), and data reliability (unverified information). As a result, models built on these datasets may face limitations in scope, accuracy, and efficiency.

LAMs face dataset challenges similar to those of LLMs, but the data-processing phases of LAMs, such as quality validation and synthesis, are even more critical given their action-oriented nature. The xLAM family of Large Action Models aims to enhance the performance of LAMs for autonomous AI agents while addressing many of these dataset limitations, making them accessible to a broader user community.

In this article, Salesforce's Silvio Savarese discusses how LAMs herald the next wave of autonomous AI. In an earlier article, he examines LAMs and identifies their core challenge:

> …the world is not a static place, and any agent meant to interact with it must be flexible enough to adapt gracefully to changing circumstances

The LAM landscape has been evolving to address this challenge, along with the data issues mentioned earlier. At Dreamforce 2024, the unveiling of Agentforce marked a new generation of autonomous, actionable agents powered by LAMs, designed to overcome many of these initial obstacles. For more details, visit the event website.


The post K-Fold Cross Validation with multiple values of K first appeared on Raju Alluri.

The post K-Fold Cross Validation with multiple values of K appeared first on Secvice.

A special case of the K-Fold Cross Validation method is Leave-One-Out Cross Validation (LOOCV), where *K* equals the number of data points, making each fold consist of a single sample. Compared to LOOCV, K-Fold Cross Validation strikes a balance between computational efficiency and the bias-variance tradeoff.
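The K = n special case can be seen directly in scikit-learn, where `LeaveOneOut` behaves the same as `KFold` with as many splits as samples. This is a minimal illustration (the toy array is mine, not from the original notebook):

```python
# LOOCV as the K = n special case of K-Fold Cross Validation:
# LeaveOneOut() is equivalent to KFold(n_splits=len(X)).
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut

X = np.arange(10).reshape(-1, 1)  # 10 samples

loo_splits = list(LeaveOneOut().split(X))
kf_splits = list(KFold(n_splits=len(X)).split(X))

# Both produce n folds, each with exactly one test sample
assert all(len(test) == 1 for _, test in loo_splits)
print(len(loo_splits), len(kf_splits))  # 10 10
```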

A simplified version of the K-Fold Cross Validation approach is as follows:

- Split the sample data (of size n) into K equal sets (folds)
- Use K-1 sets as training data, leaving one set out, and train the model
- Use the left-out set as test data and measure accuracy
- Repeat the previous two steps with each of the K sets as test data (and the other K-1 sets as training data)
- By the end of K iterations, each of the K sets has been used as test data, yielding K accuracy measures
- Compute the average of the accuracies across all K iterations
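The steps above can be sketched with scikit-learn. This is a minimal illustration, not the article's notebook: the dataset is a stand-in (the sklearn-bundled breast cancer data, since the Pima Diabetes CSV is not shown here), and K=8 is one of the fold counts explored later.

```python
# Minimal K-Fold Cross Validation loop following the steps above.
# Dataset and K are illustrative stand-ins for the article's setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
K = 8
kf = KFold(n_splits=K, shuffle=True, random_state=42)

accuracies = []
for train_idx, test_idx in kf.split(X):
    # Train on K-1 folds, test on the single held-out fold
    model = DecisionTreeClassifier(max_depth=2, random_state=42)
    model.fit(X[train_idx], y[train_idx])
    accuracies.append(model.score(X[test_idx], y[test_idx]))

print(f"Mean accuracy over {K} folds: {np.mean(accuracies):.4f}")
```

Note that scikit-learn's `KFold` tolerates a sample size that is not exactly divisible by K (the remainder is spread across the first folds); the divisibility prerequisite below applies when you want folds of strictly equal size.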

Key features of this K-Fold Cross Validation method are:

- Each data sample is used K-1 times for training
- Each data sample is used exactly once for testing
- There is a prerequisite that the sample size n is divisible by K, the number of folds

What is a good value of K? How many folds are best suited for reaching reasonable accuracy without too much computational cost? To get some insight, let us look at the impact on accuracy for various values of K.

The Diabetes dataset is a widely used medical dataset containing diagnostic measurements used to predict the likelihood of diabetes onset. We use this dataset to evaluate the accuracies for multiple values of K.

This Jupyter Notebook code outlines the steps to measure the accuracy of the K-Fold Cross Validation method for multiple values of K. The Diabetes dataset of 768 samples is a good candidate for iterating through multiple values of K.

The code computes the factors of the dataset size and then discards 1 and the size itself, so that only the factors from 2 to n/2 remain. In essence, K is iterated through all factors of n from 2 to n/2, where n is the size of the dataset.

In each iteration, the accuracy is calculated and saved. At the end of all iterations, the accuracies are printed against values of K.
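Since the original notebook is not reproduced here, the sweep described above can be sketched as follows. The helper names (`factors_between`, `sweep_k`) are mine; the classifier and tree depth of 2 follow the article, and the data loading is left to the caller:

```python
# Sweep K over every factor of n between 2 and n/2, running K-Fold CV
# for each and recording the mean accuracy. A hedged sketch of the
# notebook's approach, not the original code.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def factors_between(n):
    """Factors of n strictly between 1 and n, i.e. candidate fold counts."""
    return [k for k in range(2, n // 2 + 1) if n % k == 0]

def sweep_k(X, y):
    rows = []
    for k in factors_between(len(X)):
        scores = cross_val_score(
            DecisionTreeClassifier(max_depth=2, random_state=42), X, y, cv=k
        )
        rows.append({"K": k, "Accuracy": scores.mean()})
    return pd.DataFrame(rows)

# For n = 768 this yields the 16 fold counts seen in the output below
print(factors_between(768))
```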

```
Accuracy of K-Fold based on K
K Accuracy
0 2.0 0.746094
1 3.0 0.734375
2 4.0 0.729167
3 6.0 0.743490
4 8.0 0.753906
5 12.0 0.756510
6 16.0 0.752604
7 24.0 0.743490
8 32.0 0.747396
9 48.0 0.752604
10 64.0 0.761719
11 96.0 0.765625
12 128.0 0.764323
13 192.0 0.765625
14 256.0 0.772135
15 384.0 0.772135
```

As we can see, the best accuracies (for this specific dataset) are obtained at high values of K. However, the accuracies for K in the 8 to 16 range are already reasonable. Here is a graph that depicts the accuracies for the values of K.
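The graph can be reproduced from the printed table. This plotting snippet is my own (the original notebook's plotting code is not shown); a log-2 scale on K keeps the widely spaced fold counts readable:

```python
# Plot mean accuracy against K, with the values copied from the
# printed output above. The file name is an arbitrary choice.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

ks = [2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384]
acc = [0.746094, 0.734375, 0.729167, 0.743490, 0.753906, 0.756510,
       0.752604, 0.743490, 0.747396, 0.752604, 0.761719, 0.765625,
       0.764323, 0.765625, 0.772135, 0.772135]

plt.plot(ks, acc, marker="o")
plt.xscale("log", base=2)
plt.xlabel("K (number of folds)")
plt.ylabel("Mean accuracy")
plt.title("Accuracy of K-Fold based on K")
plt.savefig("kfold_accuracy.png")
```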

You can download the code and try out a few variations, including the choice of classifier (I used a Decision Tree Classifier) and the depth of the Decision Tree (I used a tree depth of 2 to simplify the calculations).

Food for thought: what is your take on the tradeoff between the higher accuracy at large values of K and the additional computational cost it requires?

The idea of testing K-Fold Cross Validation across multiple values of K came from one of the lab assignments from IIIT-Hyderabad’s flagship AIML program with Talentsprint.


The post Credential Holder Directories for Google Cloud Certified Professionals first appeared on Raju Alluri.

The post Credential Holder Directories for Google Cloud Certified Professionals appeared first on Secvice.

At the time of this writing, there are about 2,000 professionals listed in the Directory, covering a handful of certifications offered by Google Cloud Platform. That number is likely to grow quickly. One good feature of this directory is that each professional can maintain a short profile and links to Twitter and LinkedIn, along with all the Google Cloud certifications the person holds.
