hadoop java.lang.ClassCastException

public static class PVMapper extends Mapper<Text, Text, Text, Text> {

    public void map(Text key, Text value, org.apache.hadoop.mapreduce.Mapper<Text, Text, Text, Text>.Context context) throws IOException, InterruptedException {

    }
}

public static class PVReducer extends Reducer<Text, Text, Text, LongWritable> {

    public void reduce(Text key, Iterable<Text> values, org.apache.hadoop.mapreduce.Reducer<Text, Text, Text, LongWritable>.Context context) throws IOException, InterruptedException {

    }
}

 

It always fails with:

java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
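(For context: TextInputFormat, which the follow-up below shows is in use, hands each mapper a LongWritable byte offset as the input key, so declaring the key type as Text forces a failing runtime cast. A plain-Java sketch of the same mechanism, with Long/String standing in for LongWritable/Text:)

```java
public class CastDemo {
    public static void main(String[] args) {
        // The framework sees the key as Object at runtime; a Long stands in
        // for the LongWritable offset that TextInputFormat supplies.
        Object key = Long.valueOf(0L);
        try {
            String text = (String) key; // stands in for the cast to Text
            System.out.println(text);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }
    }
}
```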

If I change

public static class PVMapper extends Mapper<Text, Text, Text, Text>

to

public static class PVMapper extends Mapper<LongWritable, Text, Text, Text>

it instead throws: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable

How can I fix this?

许文强
Posted 6 years ago · 2 replies / 1K+ reads
2 answers in total · last reply: 6 years ago

Import the Hadoop source into Eclipse and set a breakpoint; the call stack will show you where the map method is being invoked from.

Also, OP, what is your InputFormat configured as? And which Hadoop version are you on?

1.0.0

job.setInputFormatClass(CountClassNameTextInputFormat.class);

public class CountClassNameTextInputFormat extends TextInputFormat {

    /* (non-Javadoc)
     * @see org.apache.hadoop.mapreduce.lib.input.TextInputFormat#isSplitable(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.fs.Path)
     */
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}
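(A likely cause of the second error, judging only from the snippets shown: since TextInputFormat delivers LongWritable keys, changing just the class's generic parameters while the map method still declares a Text key means that method no longer overrides Mapper.map. Hadoop then runs the inherited identity map, which passes the LongWritable key straight through to a job expecting Text map output. The overriding pitfall itself can be demonstrated in plain Java; Base and Sub are hypothetical stand-ins for Mapper and PVMapper:)

```java
// Demonstrates how a mismatched parameter type silently fails to
// override a generic method.
class Base<K> {
    // Default implementation passes the key through unchanged,
    // like Hadoop's identity Mapper.map.
    public String map(K key) {
        return "identity:" + key;
    }
}

class Sub extends Base<Long> {
    // Parameter type String does not match K = Long, so this
    // OVERLOADS rather than overrides Base.map.
    public String map(String key) {
        return "custom:" + key;
    }
}

public class OverridePitfall {
    public static void main(String[] args) {
        Base<Long> m = new Sub();
        // The identity version runs; the custom map is never called.
        System.out.println(m.map(42L)); // prints "identity:42"
    }
}
```

Annotating the subclass method with @Override would turn this silent mismatch into a compile-time error, which is why it is worth adding on map and reduce.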
