Package org.apache.hadoop.fs

Examples of org.apache.hadoop.fs.FileContext

While URI names are very flexible, they require knowing the name or address of the server. For convenience, one often wants to access the default file system in one's environment without knowing its name or address. This has the additional benefit of allowing the default file system to be changed (e.g. when an admin moves an application from cluster1 to cluster2).

To facilitate this, Hadoop supports a notion of a default file system. The user can set a default file system, although this is typically configured for you in your environment via the default config. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly, a user can use working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, it can be in a different FS.

Hence Hadoop path names can be one of:

  * a fully-qualified URI: scheme://authority/path (e.g. hdfs://nnAddress:nnPort/foo/bar)
  * a slash-relative name: a path relative to the default file system (e.g. /foo/bar)
  * a working-directory-relative name: a path relative to the working directory (e.g. foo/bar)

Relative paths with a scheme (scheme:foo/bar) are illegal.
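For illustration, here is a minimal sketch of the three legal forms (the namenode address and paths are hypothetical, the usual org.apache.hadoop.fs imports are assumed, and exception handling is omitted):

  FileContext fc = FileContext.getFileContext();        // default FS comes from your config
  fc.open(new Path("hdfs://nnAddress:8020/foo/bar"));   // fully-qualified URI
  fc.open(new Path("/foo/bar"));                         // slash-relative: resolved against the default FS
  fc.open(new Path("foo/bar"));                          // wd-relative: resolved against the working directory
  // new Path("hdfs:foo/bar") -- a relative path with a scheme -- would be rejected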

*** The Role of the FileContext and configuration defaults ***

The FileContext provides the file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in a Unix system. These two properties are, in general, obtained from the default configuration file in your environment (see Configuration). No other configuration parameters are obtained from the default config as far as the file context layer is concerned.
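As a rough sketch of that per-process state (the umask value and working directory below are purely illustrative):

  FileContext fc = FileContext.getFileContext();      // defaults come from your config
  fc.setUMask(new FsPermission((short) 022));         // umask applied to files created afterwards
  fc.setWorkingDirectory(new Path("/user/alice"));    // base for working-directory-relative names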

All file system instances (i.e. deployments of file systems) have default properties; we call these server-side (SS) defaults. Operations such as create allow one to select many properties: either pass them in as explicit parameters or use the SS defaults.

The file-system-related SS defaults include the block size, replication factor, buffer size, and bytes per checksum.
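For example, create() falls back to the SS defaults unless individual properties are overridden via CreateOpts. A hedged sketch, reusing the FileContext fc from above (paths and sizes are illustrative; java.util.EnumSet and the org.apache.hadoop.fs classes are assumed to be imported):

  // Rely on the server-side defaults for block size, replication, buffer size, etc.
  FSDataOutputStream out1 = fc.create(new Path("/foo/a"), EnumSet.of(CreateFlag.CREATE));

  // Override selected properties explicitly instead of taking the SS defaults
  FSDataOutputStream out2 = fc.create(new Path("/foo/b"), EnumSet.of(CreateFlag.CREATE),
      Options.CreateOpts.blockSize(128L * 1024 * 1024),
      Options.CreateOpts.repFac((short) 2));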

*** Usage Model for the FileContext class ***

Example 1: Use the default config, read from $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.
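A minimal sketch of Example 1 (the paths are illustrative and exception handling is omitted):

  FileContext fc = FileContext.getFileContext();       // the default config supplies the default FS
  fc.setWorkingDirectory(new Path("/user/alice"));
  FSDataOutputStream out = fc.create(new Path("foo/bar"), EnumSet.of(CreateFlag.CREATE));
  out.close();
  FSDataInputStream in = fc.open(new Path("foo/bar")); // working-directory-relative name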

Example 2: Get a FileContext with a specific URI as the default FS.
Example 3: Get a FileContext with the local file system as the default.
Example 4: Use a specific config, ignoring $HADOOP_CONFIG. Generally you should not need to use an explicit config.
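Hedged sketches of Examples 2 through 4 (the URI is hypothetical, and someConfigSomeonePassedToYou stands in for a Configuration obtained elsewhere):

  // Example 2: a specific URI as the default FS
  FileContext fc2 = FileContext.getFileContext(URI.create("hdfs://nn.example.com:8020"));

  // Example 3: the local file system as the default
  FileContext fc3 = FileContext.getLocalFSFileContext();

  // Example 4: a caller-supplied Configuration instead of $HADOOP_CONFIG
  Configuration configX = someConfigSomeonePassedToYou;
  FileContext fc4 = FileContext.getFileContext(configX);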

  }

  @Override
  public Configuration loadConfFile() throws IOException {
    Path confPath = getConfFile();
    FileContext fc = FileContext.getFileContext(confPath.toUri(), conf);
    Configuration jobConf = new Configuration(false);
    jobConf.addResource(fc.open(confPath), confPath.toString());
    return jobConf;
  }


    delService.init(new Configuration());
    delService.start();

    // Spy on the local AbstractFileSystem so that directory creation can be stubbed out
    AbstractFileSystem spylfs =
      spy(FileContext.getLocalFSFileContext().getDefaultFileSystem());
    FileContext lfs = FileContext.getFileContext(spylfs, conf);
    doNothing().when(spylfs).mkdir(
        isA(Path.class), isA(FsPermission.class), anyBoolean());

    List<Path> localDirs = new ArrayList<Path>();
    String[] sDirs = new String[4];
    for (int i = 0; i < 4; ++i) {
      localDirs.add(lfs.makeQualified(new Path(basedir, i + "")));
      sDirs[i] = localDirs.get(i).toString();
    }
    conf.setStrings(YarnConfiguration.NM_LOCAL_DIRS, sDirs);
    LocalDirsHandlerService diskhandler = new LocalDirsHandlerService();
    diskhandler.init(conf);

  @SuppressWarnings("unchecked") // mocked generics
  public void testResourceRelease() throws Exception {
    Configuration conf = new YarnConfiguration();
    AbstractFileSystem spylfs =
      spy(FileContext.getLocalFSFileContext().getDefaultFileSystem());
    final FileContext lfs = FileContext.getFileContext(spylfs, conf);
    doNothing().when(spylfs).mkdir(
        isA(Path.class), isA(FsPermission.class), anyBoolean());

    List<Path> localDirs = new ArrayList<Path>();
    String[] sDirs = new String[4];
    for (int i = 0; i < 4; ++i) {
      localDirs.add(lfs.makeQualified(new Path(basedir, i + "")));
      sDirs[i] = localDirs.get(i).toString();
    }
    conf.setStrings(YarnConfiguration.NM_LOCAL_DIRS, sDirs);

    LocalizerTracker mockLocallilzerTracker = mock(LocalizerTracker.class);

  @SuppressWarnings("unchecked") // mocked generics
  public void testLocalizationHeartbeat() throws Exception {
    Configuration conf = new YarnConfiguration();
    AbstractFileSystem spylfs =
      spy(FileContext.getLocalFSFileContext().getDefaultFileSystem());
    final FileContext lfs = FileContext.getFileContext(spylfs, conf);
    doNothing().when(spylfs).mkdir(
        isA(Path.class), isA(FsPermission.class), anyBoolean());

    List<Path> localDirs = new ArrayList<Path>();
    String[] sDirs = new String[4];
    for (int i = 0; i < 4; ++i) {
      localDirs.add(lfs.makeQualified(new Path(basedir, i + "")));
      sDirs[i] = localDirs.get(i).toString();
    }
    conf.setStrings(YarnConfiguration.NM_LOCAL_DIRS, sDirs);

    DrainDispatcher dispatcher = new DrainDispatcher();

    public synchronized Path getConfFile() {
      return confFile;
    }
   
    public synchronized Configuration loadConfFile() throws IOException {
      FileContext fc = FileContext.getFileContext(confFile.toUri(), conf);
      Configuration jobConf = new Configuration(false);
      jobConf.addResource(fc.open(confFile), confFile.toString());
      return jobConf;
    }

    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,  "000");

    try {
      Path stagingPath = FileContext.getFileContext(conf).makeQualified(
          new Path(conf.get(MRJobConfig.MR_AM_STAGING_DIR)));
      FileContext fc=FileContext.getFileContext(stagingPath.toUri(), conf);
      if (fc.util().exists(stagingPath)) {
        LOG.info(stagingPath + " exists! deleting...");
        fc.delete(stagingPath, true);
      }
      LOG.info("mkdir: " + stagingPath);
      //mkdir the staging directory so that right permissions are set while running as proxy user
      fc.mkdir(stagingPath, null, true);
      //mkdir done directory as well
      String doneDir = JobHistoryUtils
          .getConfiguredHistoryServerDoneDirPrefix(conf);
      Path doneDirPath = fc.makeQualified(new Path(doneDir));
      fc.mkdir(doneDirPath, null, true);
    } catch (IOException e) {
      throw new YarnRuntimeException("Could not create staging directory. ", e);
    }
    conf.set(MRConfig.MASTER_ADDRESS, "test"); // default is "local", with which shuffle doesn't happen

    assertEquals(targetQual, fc.getFileLinkStatus(linkAbs).getSymlink());
    // Check that the target is qualified using the file system of the
    // path used to access the link (if the link target was not specified
    // fully qualified, the link target is used verbatim).
    if (!"file".equals(getScheme())) {
      FileContext localFc = FileContext.getLocalFSFileContext();
      Path linkQual = new Path(testURI().toString(), linkAbs);
      assertEquals(targetQual,
                   localFc.getFileLinkStatus(linkQual).getSymlink());
    }
   
    // Check getLinkTarget
    assertEquals(expectedTarget, fc.getLinkTarget(linkAbs));
   
    // Now read using all path types..
    fc.setWorkingDirectory(dir);   
    readFile(new Path("linkToFile"));
    readFile(linkAbs);
    // And fully qualified.. (NB: for local fs this is partially qualified)
    readFile(new Path(testURI().toString(), linkAbs));
    // And partially qualified..
    boolean failureExpected = !"file".equals(getScheme());
    try {
      readFile(new Path(getScheme()+"://"+testBaseDir1()+"/linkToFile"));
      assertFalse(failureExpected);
    } catch (Exception e) {
      assertTrue(failureExpected);
    }
   
    // Now read using a different file context (for HDFS at least)
    if (!"file".equals(getScheme())) {
      FileContext localFc = FileContext.getLocalFSFileContext();
      readFile(localFc, new Path(testURI().toString(), linkAbs));
    }
  }

    // Partially qualified paths are covered for local file systems
    // in the previous test.
    if ("file".equals(getScheme())) {
      return;
    }
    FileContext localFc = FileContext.getLocalFSFileContext();
   
    fc.createSymlink(fileWoHost, link, false);
    // Partially qualified path is stored
    assertEquals(fileWoHost, fc.getLinkTarget(linkQual));   
    // NB: We do not add an authority
    assertEquals(fileWoHost.toString(),
      fc.getFileLinkStatus(link).getSymlink().toString());
    assertEquals(fileWoHost.toString(),
      fc.getFileLinkStatus(linkQual).getSymlink().toString());
    // Ditto even from another file system
    assertEquals(fileWoHost.toString(),
      localFc.getFileLinkStatus(linkQual).getSymlink().toString());
    // Same as if we accessed a partially qualified path directly
    try {
      readFile(link);
      fail("DFS requires URIs with schemes have an authority");
    } catch (java.lang.RuntimeException e) {

        return amInfoList;
      }

      @Override
      public Configuration loadConfFile() throws IOException {
        FileContext fc = FileContext.getFileContext(configFile.toUri(), conf);
        Configuration jobConf = new Configuration(false);
        jobConf.addResource(fc.open(configFile), configFile.toString());
        return jobConf;
      }
    };
  }

