A Dataflow is a file that contains instructions to create Datasets, which you can use for Einstein Analytics data visualisations. A Dataset is a collection of data; think of it as a table, a set of values where every column represents a particular variable and each row corresponds to a given record. Transformation, then, can be defined as the process of converting data from one format or structure into another.

You can find the list of transformations on the top pane of a Dataflow. Let's cover the different transformations available to you and get to know why we would use each of them; this is exactly what I intend to explain today.

Transformations for Analytics Dataflows

DataSetBuilder

This is the first one that you come across on the top pane of any Dataflow. Its purpose is to build objects and relationships easily: it allows you to grab the objects, fields and relationships that you intend to use for your Dataset. For example, here we are fetching Opportunity Products with a couple of fields, together with their related Opportunities.

sfdcDigest and digest

These two are the transformations that fetch data. If you have connected Salesforce locally, you will use the first one, sfdcDigest, to extract data from an object in your local Salesforce org; for example, I could use it in our previous example to find the Account information related to the Opportunities. Use digest to extract data synced from an external Salesforce org, or data synced through an external connection: in a dataflow, the digest transformation extracts whatever has been synced over that connection.

edgemart

This transformation with a cool name loads existing, registered Datasets that were created outside of the dataflow, perhaps by another dataflow you already have in place. For example, we could use an edgemart to bring in an existing Dataset of Account balances from an external system that is already connected and registered in Analytics.

append

It is here that you add one Dataset to another: append combines rows from multiple Datasets into a single Dataset. A common use case is to create Datasets about activities, use computeExpressions to add fields, and then append them, so that you get all the fields in the same Dataset.

augment

Append and augment have similar names but different functions. Augment is about adding data from one object, such as a child, to another; basically, it is about adding columns. In our previous example, we could use augment to bring the information of the Account into our initial flow of Opportunity Products and Opportunities. As you can see in the screenshot above, there are two sfdcDigests where we bring over Salesforce objects (skip the Flatten node for a moment); then, to combine data from one into columns of the other, we use the Augment node named 105.

computeExpression and computeRelative

ComputeExpression is a formula: a powerful way to create fields, operating on one record at a time. It allows you to look at the rest of the fields on that record and make calculations; for example, we could take the Probability of our Opportunity and multiply it by the Amount field.

On the other hand, we have computeRelative, also expression-based; however, this time it acts across rows (over records). The 'Partition By' is used to slice and group the rows; for example, if you want to order the latest Opportunity for each Account, you would partition by the Account. In either of the two 'computes' you can add multiple fields in one node.

Image ref: a fantastic series from the product team on YouTube to learn Einstein Analytics.

So far, we have looked at the first eight transformations and how data is brought in via a wizard, from a local connection to Salesforce or from an existing Dataset. We also learnt how to enrich data using fields as well as columns. Now, we'll look at the remaining transformations and how to convert your data once it is ready as a Dataset!

dim2mea

As the cryptic name tells us, you will be using this transformation if you need to change a dimension into a measure. It creates a new column in the Dataset with a new measure value taken from a dimension field, preserving the original dimension to ensure that existing lenses and dashboards do not break if the field is used elsewhere. In other words, you keep something you can group by, like the Stage field, and gain something you can do calculations with: a number.

flatten

Flatten collapses hierarchical data, such as a chain of parent-child records, into a single multi-value column. For example, if we want to have one field in the Opportunity with a concatenation of the product families from the Opportunity Products, we could just use flatten for that. The most common use case here, though, is to get the user role hierarchy so that you can add it to Security Predicates (row-level security works a bit differently in Einstein Analytics; maybe we should keep it in the backlog for a future article!).
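Behind the scenes, everything the dataset builder wizard and the digest transformations produce is stored as JSON nodes in the dataflow definition file. As a rough sketch (the node names and field lists here are illustrative, not taken from the article's screenshots), the two digests fetching Opportunity Products and their related Opportunities might look like this:

```json
{
  "Extract_OpportunityProducts": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "OpportunityLineItem",
      "fields": [
        { "name": "Id" },
        { "name": "OpportunityId" },
        { "name": "Quantity" },
        { "name": "TotalPrice" }
      ]
    }
  },
  "Extract_Opportunities": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "Opportunity",
      "fields": [
        { "name": "Id" },
        { "name": "Name" },
        { "name": "StageName" },
        { "name": "Amount" },
        { "name": "AccountId" }
      ]
    }
  }
}
```

A digest node for an external connection uses the `digest` action instead and additionally names the connection it reads from.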
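To make the combining transformations more concrete, here is a hedged sketch of edgemart, append and augment nodes as they might appear in a dataflow's JSON definition (node names, keys and the AccountBalance alias are made up for the example; the upstream Extract_* nodes are assumed to exist):

```json
{
  "Load_AccountBalance": {
    "action": "edgemart",
    "parameters": {
      "alias": "AccountBalance"
    }
  },
  "Append_Activities": {
    "action": "append",
    "parameters": {
      "sources": [ "Extract_Tasks", "Extract_Events" ]
    }
  },
  "Augment_OppAccount": {
    "action": "augment",
    "parameters": {
      "left": "Extract_Opportunities",
      "left_key": [ "AccountId" ],
      "relationship": "Account",
      "right": "Extract_Accounts",
      "right_key": [ "Id" ],
      "right_select": [ "Name", "Industry" ],
      "operation": "LookupSingleValue"
    }
  }
}
```

In the augment node, `relationship` becomes the prefix of the added columns, so the selected fields appear downstream as `Account.Name` and `Account.Industry`.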
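The compute and conversion transformations can be sketched the same way. This is a hedged example, assuming an upstream node called Extract_Opportunities and illustrative field names: a computeExpression multiplying Amount by Probability, a computeRelative fetching the previous Opportunity amount per Account, and a dim2mea converting a text field to a number:

```json
{
  "Compute_ExpectedAmount": {
    "action": "computeExpression",
    "parameters": {
      "source": "Extract_Opportunities",
      "mergeWithSource": true,
      "computedFields": [
        {
          "name": "ExpectedAmount",
          "type": "Numeric",
          "precision": 18,
          "scale": 2,
          "saqlExpression": "Amount * Probability / 100"
        }
      ]
    }
  },
  "Compute_PreviousAmount": {
    "action": "computeRelative",
    "parameters": {
      "source": "Compute_ExpectedAmount",
      "partitionBy": [ "AccountId" ],
      "orderBy": [ { "name": "CloseDate", "direction": "asc" } ],
      "computedFields": [
        {
          "name": "PreviousAmount",
          "expression": {
            "sourceField": "Amount",
            "offset": "previous()",
            "default": "0"
          }
        }
      ]
    }
  },
  "Convert_DaysToClose": {
    "action": "dim2mea",
    "parameters": {
      "source": "Compute_PreviousAmount",
      "dimension": "DaysToClose",
      "measure": "DaysToClose_Mea",
      "measureDefault": "0",
      "measureType": "long"
    }
  }
}
```

Note how dim2mea writes the measure to a new column (`DaysToClose_Mea` here) while the original dimension stays in place, which is exactly why existing lenses keep working.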