Any way to access methods from individual stages in PySpark PipelineModel?

I've created a PipelineModel for doing LDA in Spark 2.0 (via PySpark API):

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, CountVectorizer
from pyspark.ml.clustering import LDA

def create_lda_pipeline(minTokenLength=1, minDF=1, minTF=1, numTopics=10, seed=42, pattern=r'[\W]+'):
    """
    Create a pipeline for running an LDA model on a corpus. This function does not need data and will not
    do any fitting until the caller invokes fit() on the returned pipeline.
    Args:
        minTokenLength: minimum token length to keep after tokenization
        minDF: minimum number of documents a word must appear in across the corpus
        minTF: minimum number of times a word must appear in a document
        numTopics: number of topics (k) for the LDA model
        seed: random seed for reproducibility
        pattern: regular expression used to split text into words

    Returns:
        pipeline: a pyspark.ml.Pipeline (becomes a PipelineModel after fitting)
    """
    reTokenizer = RegexTokenizer(inputCol="text", outputCol="tokens", pattern=pattern, minTokenLength=minTokenLength)
    cntVec = CountVectorizer(inputCol=reTokenizer.getOutputCol(), outputCol="vectors", minDF=minDF, minTF=minTF)
    lda = LDA(k=numTopics, seed=seed, optimizer="em", featuresCol=cntVec.getOutputCol())
    pipeline = Pipeline(stages=[reTokenizer, cntVec, lda])
    return pipeline

I want to calculate the perplexity on a dataset using the trained model with the LDAModel.logPerplexity() method, so I tried running the following:

training = get_20_newsgroups_data(test_or_train='train')
pipeline = create_lda_pipeline(numTopics=20, minDF=3, minTokenLength=5)
model = pipeline.fit(training)  # train model on training data
testing = get_20_newsgroups_data(test_or_train='test')
perplexity = model.logPerplexity(testing)
pprint(perplexity)

This just results in the following AttributeError:

'PipelineModel' object has no attribute 'logPerplexity'

I understand why this error happens, since the logPerplexity method belongs to LDAModel, not PipelineModel, but I am wondering if there is a way to access the method from that stage.

Spacetime answered 29/7, 2016 at 17:42

All transformers in the pipeline are stored in the stages property. Extract the stages, take the last one, and you're ready to go:

model.stages[-1].logPerplexity(testing)
Yakka answered 29/7, 2016 at 17:54
For Scala use model.stages.last (Alleged)
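If you would rather not rely on the LDA model being the last stage, you can look the stage up by type instead of by position. A minimal sketch of that pattern, with no PySpark dependency: DummyTokenizer and DummyLDAModel are hypothetical stand-ins for the real stage classes, and with Spark installed you would pass pyspark.ml.clustering.LDAModel as the target type.

```python
# Pick a pipeline stage by type instead of by position.
# DummyTokenizer / DummyLDAModel are hypothetical stand-ins for
# the real PySpark stage classes used only for illustration.

class DummyTokenizer:
    pass

class DummyLDAModel:
    def logPerplexity(self, dataset):
        return -42.0  # placeholder value, not a real perplexity

def get_stage(stages, stage_type):
    """Return the first stage that is an instance of stage_type."""
    for stage in stages:
        if isinstance(stage, stage_type):
            return stage
    raise ValueError(f"no stage of type {stage_type.__name__} found")

stages = [DummyTokenizer(), DummyLDAModel()]
lda_model = get_stage(stages, DummyLDAModel)
print(lda_model.logPerplexity(None))  # -42.0
```

With a real fitted PipelineModel the call would be get_stage(model.stages, LDAModel), which keeps working even if you later add or reorder stages.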

I ran into a case where pipeline.stages did not work: pipeline.stages was treated as a Param. In that case, use

pipeline.getStages()

and you will get the list of stages, just as pipeline.stages does in most cases.

Rutter answered 10/10, 2018 at 7:43
It was probably treated as a Param because it was not a trained pipeline: the stages of a pyspark.ml.Pipeline can be accessed via .getStages(), and the stages of a pyspark.ml.PipelineModel via .stages. (Arella)
@NickTo This should be an answer. (Paiz)
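The distinction in the comments above is that an unfitted Pipeline is an estimator whose stages are a settable Param read through a getter, while a fitted PipelineModel exposes its stages as a plain attribute. A stand-in sketch of that difference, with no PySpark required: these classes are hypothetical mimics of the real API, not the Spark implementation.

```python
# Hypothetical mimics of the PySpark accessors, for illustration only.

class Pipeline:
    # Unfitted estimator: stages live behind a getter, as with a Param.
    def __init__(self, stages):
        self._stages = stages

    def getStages(self):
        return self._stages

    def fit(self, data):
        # A real Pipeline would fit each stage; here we just copy them.
        return PipelineModel(stages=list(self._stages))

class PipelineModel:
    # Fitted model: stages are a plain attribute.
    def __init__(self, stages):
        self.stages = stages

pipe = Pipeline(stages=["tokenizer", "countvec", "lda"])
model = pipe.fit(data=None)
print(pipe.getStages())   # ['tokenizer', 'countvec', 'lda']
print(model.stages[-1])   # 'lda'
```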
