hadoop java.lang.ClassCastException

许文强 posted on 2012/04/16 11:49

public static class PVMapper extends Mapper<Text, Text, Text, Text> {

    public void map(Text key, Text value, org.apache.hadoop.mapreduce.Mapper<Text, Text, Text, Text>.Context context) throws IOException, InterruptedException {

    }
}

public static class PVReducer extends Reducer<Text, Text, Text, LongWritable> {

    public void reduce(Text key, Iterable<Text> values, org.apache.hadoop.mapreduce.Reducer<Text, Text, Text, LongWritable>.Context context) throws IOException, InterruptedException {

    }
}

It always fails with:

java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

If I change

public static class PVMapper extends Mapper<Text, Text, Text, Text>

to

public static class PVMapper extends Mapper<LongWritable, Text, Text, Text>

then I get: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable

Could an expert tell me how to fix this?
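A note on what is likely happening (assuming the default TextInputFormat semantics, where the record reader supplies LongWritable byte offsets as keys): Hadoop's Mapper.run() dispatches to map(KEYIN, VALUEIN, Context). If a subclass declares a map method whose parameter types do not match its own generic arguments, that method merely overloads instead of overriding, so the inherited identity map runs and passes the input key through unchanged. This mechanism can be modeled in plain Java, with hypothetical FakeMapper/BrokenMapper/FixedMapper classes standing in for the Hadoop types:

```java
// Plain-Java model of the overload-vs-override pitfall (no Hadoop needed).
class FakeMapper<KIN, VIN> {
    String lastKeyTypeWritten;

    // stand-in for Mapper's identity map: forwards the key as-is,
    // recording only its runtime type for inspection
    void map(KIN key, VIN value) {
        lastKeyTypeWritten = key.getClass().getSimpleName();
    }

    // stand-in for Mapper.run(): always dispatches through map(KIN, VIN)
    void run(KIN key, VIN value) {
        map(key, value);
    }
}

// Mirrors the posted code after the class was changed to
// Mapper<LongWritable, Text, ...> while map still took a Text key:
// map(String, String) does not match <Long, String>, so it is a
// mere overload and is never called by run().
class BrokenMapper extends FakeMapper<Long, String> {
    void map(String key, String value) {
        lastKeyTypeWritten = "String";
    }
}

// Matching the generic arguments (and adding @Override, which makes the
// compiler reject an accidental overload) fixes the dispatch.
class FixedMapper extends FakeMapper<Long, String> {
    @Override
    void map(Long key, String value) {
        lastKeyTypeWritten = "String";
    }
}
```

In the broken case the identity map forwards the Long key, which in real Hadoop surfaces as "Type mismatch in key from map: expected Text, recieved LongWritable"; in the fixed case the override runs and writes the declared output key type.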

紫海龟

Import the Hadoop source into Eclipse and step through with breakpoints; the call stack will show where the map method is being invoked.

Also, what is your InputFormat set to? And which Hadoop version are you using?

许文强

1.0.0

job.setInputFormatClass(CountClassNameTextInputFormat.class);

public class CountClassNameTextInputFormat extends TextInputFormat {

    /* (non-Javadoc)
     * @see org.apache.hadoop.mapreduce.lib.input.TextInputFormat#isSplitable(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.fs.Path)
     */
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}
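Since this InputFormat extends TextInputFormat, its LineRecordReader still produces LongWritable byte offsets as keys, so the mapper's input key type must be LongWritable and the map signature must match the class's generic arguments exactly. A sketch of a mapper consistent with that (the context.write line is purely illustrative; the real output logic is the poster's to fill in):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public static class PVMapper extends Mapper<LongWritable, Text, Text, Text> {

    // @Override turns an accidental overload into a compile-time error
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // key is the byte offset of the line; value is the line itself
        context.write(new Text("pv"), value); // illustrative output only
    }
}
```

With this signature the job's setMapOutputKeyClass(Text.class) matches what the mapper actually writes, and both reported errors should disappear.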
