How do I modify the value of a column in a PySpark DataFrame?

The data looks like this:

Survived  age
0         22.0
1         38.0
1         26.0
1         35.0
0         35.0
0         null
0         54.0
0         2.0
1         27.0
1         14.0
1         4.0
1         58.0
0         20.0
0         39.0
0         14.0
1         55.0
0         2.0
1         null
0         31.0
1         null
# 5-year intervals: (0, 5), (5, 10), ..., (95, 100)
age_interval = [(lower, upper) for lower, upper in zip(range(0, 96, 5), range(5, 101, 5))]

def age_partition(age):
    """Map an age value to its interval label; missing ages become the string "None"."""
    if age is None:
        return "None"
    for lower, upper in age_interval:
        if lower <= age <= upper:
            return f"({lower}, {upper})"
    return "None"

I want to modify the age column, for example changing 22.0 to (20, 30) and 38.0 to (30, 40). The code above is the function that maps an age value to an interval.

How should I apply it to modify the age column?

Answered Apr. 30, 2021:

import pandas as pd

# Load the data; the file name is a placeholder.
df = pd.read_csv('xxx.csv', header=0, encoding='utf-8')

# 5-year intervals: (0, 5), (5, 10), ..., (95, 100)
age_interval = [(lower, upper) for lower, upper in zip(range(0, 96, 5), range(5, 101, 5))]

def age_partition(age):
    """Map an age value to its interval label; missing ages become the string "None"."""
    if pd.isna(age):  # pandas reads null as NaN, so check with pd.isna rather than "is None"
        return "None"
    for lower, upper in age_interval:
        if lower <= age <= upper:
            return f"({lower}, {upper})"
    return "None"

# Apply the function to every value of the age column and store the result in a new column.
df['new_col'] = df.age.apply(age_partition)
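The answer above uses pandas, but the question asks about PySpark. Below is a minimal sketch, under the assumption that the same CSV is loaded into a Spark DataFrame named sdf, of how the mapping function could be applied with a user-defined function (udf) and withColumn. In a Spark UDF a null age arrives as Python None, so the original None check works here.

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# Assumption: the same data loaded as a Spark DataFrame from a CSV file.
sdf = spark.read.csv('xxx.csv', header=True, inferSchema=True)

age_interval = [(lower, upper) for lower, upper in zip(range(0, 96, 5), range(5, 101, 5))]

def age_partition(age):
    """Map an age value to its interval label; nulls become the string "None"."""
    if age is None:  # Spark passes a SQL NULL to a Python UDF as None
        return "None"
    for lower, upper in age_interval:
        if lower <= age <= upper:
            return f"({lower}, {upper})"
    return "None"

# Wrap the Python function as a UDF returning a string, then overwrite the age column.
age_partition_udf = udf(age_partition, StringType())
sdf = sdf.withColumn('age', age_partition_udf(sdf['age']))
sdf.show()

Calling withColumn with an existing column name replaces that column, so this overwrites age in place; pass a new column name instead if the original values should be kept.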