Examples of FileContext

While URI names are very flexible, they require knowing the name or address of the server. For convenience, one often wants to access the default file system in one's environment without knowing its name or address. This has the additional benefit of allowing the default file system to change (e.g., when an admin moves an application from cluster1 to cluster2).

To facilitate this, Hadoop supports the notion of a default file system. Users can set their own default file system, although this is typically set up in the environment via the default config. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly, a user can also use working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, it can be in a different FS.

Hence Hadoop path names can be one of:

  • fully-qualified URIs: scheme://authority/path
  • slash-relative names: /path, resolved relative to the default file system
  • working-directory-relative names: path, resolved relative to the working directory

Relative paths with a scheme (scheme:foo/bar) are illegal.
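A minimal sketch of how the three legal forms resolve (the authority hdfs://nn1.example.com:8020, the user alice, and the paths are hypothetical, assuming fs.defaultFS points at that authority):

        // Uses the default config, which supplies the default FS.
        FileContext fc = FileContext.getFileContext();

        // Fully-qualified URI: scheme, authority and path are all explicit.
        Path full = new Path("hdfs://nn1.example.com:8020/user/alice/data");

        // Slash-relative name: takes scheme and authority from the default FS,
        // so this also resolves to hdfs://nn1.example.com:8020/user/alice/data.
        Path slashRelative = fc.makeQualified(new Path("/user/alice/data"));

        // Working-directory-relative name: resolved against the working
        // directory, e.g. /user/alice/data when the wd is /user/alice.
        Path wdRelative = fc.makeQualified(new Path("data"));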

*** The Role of the FileContext and Configuration Defaults ***

The FileContext provides the file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in a Unix system. These two properties, in general, are obtained from the default configuration file in your environment (see {@link Configuration}). No other configuration parameters are obtained from the default config as far as the file context layer is concerned. All file system instances (i.e. deployments of file systems) have default properties; we call these server-side (SS) defaults. Operations like create allow one to select many properties: either pass them in as explicit parameters or use the SS defaults.

The file system related SS defaults are:

  • the home directory (default is /user/userName)
  • the initial working directory (only for the local FS)
  • replication factor
  • block size
  • buffer size
  • encryptDataTransfer
  • checksum option (checksumType and bytesPerChecksum)
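For instance (a sketch; the path and the override values are illustrative), a create call can either fall back on the SS defaults or override selected ones per call via Options.CreateOpts:

        FileContext fc = FileContext.getFileContext();
        Path p = new Path("/user/alice/output.dat"); // hypothetical path

        // Rely on the SS defaults for block size, replication, buffer size...
        FSDataOutputStream out1 = fc.create(p, EnumSet.of(CreateFlag.CREATE));

        // ...or override some of them explicitly for this call.
        FSDataOutputStream out2 = fc.create(p,
            EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
            Options.CreateOpts.blockSize(128 * 1024 * 1024L), // 128 MB blocks
            Options.CreateOpts.repFac((short) 2));            // 2 replicas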

*** Usage Model for the FileContext class ***

Example 1: Use the default config read from the $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.

Example 2: Get a FileContext with a specific URI as the default FS.

Example 3: Get a FileContext with the local file system as the default.

Example 4: Use a specific config, ignoring $HADOOP_CONFIG. Generally you should not need to use an explicit config.
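All four patterns in a minimal sketch (the URI is hypothetical):

        // Example 1: the default config, which carries your default FS.
        FileContext fc1 = FileContext.getFileContext();

        // Example 2: a specific URI as the default FS.
        FileContext fc2 =
            FileContext.getFileContext(URI.create("hdfs://nn1.example.com:8020"));

        // Example 3: the local file system as the default.
        FileContext fc3 = FileContext.getLocalFSFileContext();

        // Example 4: an explicit Configuration, ignoring $HADOOP_CONFIG.
        Configuration configX = someConfigSomeonePassedToYou; // hypothetical
        FileContext fc4 = FileContext.getFileContext(configX); // configX is not changed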
  • org.eclipse.php.internal.core.typeinference.context.FileContext
    This is a context for a PHP file (outside of classes or functions). @author michael
  • org.simpleframework.http.resource.FileContext
    The FileContext provides an implementation of the Context object that provides a direct mapping from a request URI, as defined in RFC 2616, to an OS-specific target. This uses a File object to define the mapping for the request URI paths. Using a File object allows the FileContext to be easily used with both DOS and UNIX systems.

    This Indexer implementation uses a MIME database to obtain mappings for the getContentType method. The file used is acquired from the class path as a mapping from file extension to MIME type. This file can be modified if any additional types are required. However, it is more advisable to simply extend this object and override the content-type method. @author Niall Gallagher @see org.simpleframework.http.resource.FileIndexer

  • org.uengine.contexts.FileContext
    @author Jinyoung Jang
  • simple.http.serve.FileContext

  • Examples of org.apache.hadoop.fs.FileContext

        this.publicRsrc =
            new LocalResourcesTrackerImpl(null, dispatcher, true, conf);
        this.recordFactory = RecordFactoryProvider.getRecordFactory(conf);

        try {
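          // A FileContext bound to the local file system; the umask set here
          // governs the permissions of every directory created below.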
          FileContext lfs = getLocalFileContext(conf);
          lfs.setUMask(new FsPermission((short)FsPermission.DEFAULT_UMASK));

          cleanUpLocalDir(lfs, delService);

          List<String> localDirs = dirsHandler.getLocalDirs();
          for (String localDir : localDirs) {
            // $local/usercache
            Path userDir = new Path(localDir, ContainerLocalizer.USERCACHE);
            lfs.mkdir(userDir, null, true);
            // $local/filecache
            Path fileDir = new Path(localDir, ContainerLocalizer.FILECACHE);
            lfs.mkdir(fileDir, null, true);
            // $local/nmPrivate
            Path sysDir = new Path(localDir, NM_PRIVATE_DIR);
            lfs.mkdir(sysDir, NM_PRIVATE_PERM, true);
          }

          List<String> logDirs = dirsHandler.getLogDirs();
          for (String logDir : logDirs) {
            lfs.mkdir(new Path(logDir), null, true);
          }
        } catch (IOException e) {
          throw new YarnRuntimeException("Failed to initialize LocalizationService", e);
        }

    Examples of org.apache.hadoop.fs.FileContext

        private void writeCredentials(Path nmPrivateCTokensPath)
            throws IOException {
          DataOutputStream tokenOut = null;
          try {
            Credentials credentials = context.getCredentials();
            FileContext lfs = getLocalFileContext(getConfig());
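            // Create the token file on the local FS, overwriting any existing copy.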
            tokenOut =
                lfs.create(nmPrivateCTokensPath, EnumSet.of(CREATE, OVERWRITE));
            LOG.info("Writing credentials to the nmPrivate file "
                + nmPrivateCTokensPath.toString() + ". Credentials list: ");
            if (LOG.isDebugEnabled()) {
              for (Token<? extends TokenIdentifier> tk : credentials
                  .getAllTokens()) {

    Examples of org.apache.hadoop.fs.FileContext

                    containerLogDir.toString())
                );
          }
          // /////////////////////////// End of variable expansion

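          // The container script and tokens below are written through the
          // node's local file system.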
          FileContext lfs = FileContext.getLocalFSFileContext();

          Path nmPrivateContainerScriptPath =
              dirsHandler.getLocalPathForWrite(
                  getContainerPrivateDir(appIdStr, containerIdStr) + Path.SEPARATOR
                      + CONTAINER_SCRIPT);
          Path nmPrivateTokensPath =
              dirsHandler.getLocalPathForWrite(
                  getContainerPrivateDir(appIdStr, containerIdStr)
                      + Path.SEPARATOR
                      + String.format(ContainerLocalizer.TOKEN_FILE_NAME_FMT,
                          containerIdStr));

          DataOutputStream containerScriptOutStream = null;
          DataOutputStream tokensOutStream = null;

          // Select the working directory for the container
          Path containerWorkDir =
              dirsHandler.getLocalPathForWrite(ContainerLocalizer.USERCACHE
                  + Path.SEPARATOR + user + Path.SEPARATOR
                  + ContainerLocalizer.APPCACHE + Path.SEPARATOR + appIdStr
                  + Path.SEPARATOR + containerIdStr,
                  LocalDirAllocator.SIZE_UNKNOWN, false);

          String pidFileSuffix = String.format(ContainerLaunch.PID_FILE_NAME_FMT,
              containerIdStr);

          // pid file should be in nm private dir so that it is not
          // accessible by users
          pidFilePath = dirsHandler.getLocalPathForWrite(
              ResourceLocalizationService.NM_PRIVATE_DIR + Path.SEPARATOR
              + pidFileSuffix);
          List<String> localDirs = dirsHandler.getLocalDirs();
          List<String> logDirs = dirsHandler.getLogDirs();

          List<String> containerLogDirs = new ArrayList<String>();
          for (String logDir : logDirs) {
            containerLogDirs.add(logDir + Path.SEPARATOR + relativeContainerLogDir);
          }

          if (!dirsHandler.areDisksHealthy()) {
            ret = ContainerExitStatus.DISKS_FAILED;
            throw new IOException("Most of the disks failed. "
                + dirsHandler.getDisksHealthReport());
          }

          try {
            // /////////// Write out the container-script in the nmPrivate space.
            List<Path> appDirs = new ArrayList<Path>(localDirs.size());
            for (String localDir : localDirs) {
              Path usersdir = new Path(localDir, ContainerLocalizer.USERCACHE);
              Path userdir = new Path(usersdir, user);
              Path appsdir = new Path(userdir, ContainerLocalizer.APPCACHE);
              appDirs.add(new Path(appsdir, appIdStr));
            }
            containerScriptOutStream =
              lfs.create(nmPrivateContainerScriptPath,
                  EnumSet.of(CREATE, OVERWRITE));

            // Set the token location too.
            environment.put(
                ApplicationConstants.CONTAINER_TOKEN_FILE_ENV_NAME,
                new Path(containerWorkDir,
                    FINAL_CONTAINER_TOKENS_FILE).toUri().getPath());
            // Sanitize the container's environment
            sanitizeEnv(environment, containerWorkDir, appDirs, containerLogDirs,
              localResources);
           
            // Write out the environment
            writeLaunchEnv(containerScriptOutStream, environment, localResources,
                launchContext.getCommands());
           
            // /////////// End of writing out container-script

            // /////////// Write out the container-tokens in the nmPrivate space.
            tokensOutStream =
                lfs.create(nmPrivateTokensPath, EnumSet.of(CREATE, OVERWRITE));
            Credentials creds = container.getCredentials();
            creds.writeTokenStorageToStream(tokensOutStream);
            // /////////// End of writing out container-tokens
          } finally {
            IOUtils.cleanup(LOG, containerScriptOutStream, tokensOutStream);

    Examples of org.apache.hadoop.fs.FileContext

          dispatcher.getEventHandler().handle(
            new ContainerDiagnosticsUpdateEvent(containerId, message));
        } finally {
          // cleanup pid file if present
          if (pidFilePath != null) {
            FileContext lfs = FileContext.getLocalFSFileContext();
            lfs.delete(pidFilePath, false);
          }
        }
      }

    Examples of org.apache.hadoop.fs.FileContext

        conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,  "000");

        try {
          Path stagingPath = FileContext.getFileContext(conf).makeQualified(
              new Path(conf.get(MRJobConfig.MR_AM_STAGING_DIR)));
          FileContext fc = FileContext.getFileContext(stagingPath.toUri(), conf);
          if (fc.util().exists(stagingPath)) {
            LOG.info(stagingPath + " exists! deleting...");
            fc.delete(stagingPath, true);
          }
          LOG.info("mkdir: " + stagingPath);
          // mkdir the staging directory so that the right permissions are set while running as a proxy user
          fc.mkdir(stagingPath, null, true);
          // mkdir the done directory as well
          String doneDir = JobHistoryUtils
              .getConfiguredHistoryServerDoneDirPrefix(conf);
          Path doneDirPath = fc.makeQualified(new Path(doneDir));
          fc.mkdir(doneDirPath, null, true);
        } catch (IOException e) {
          throw new YarnException("Could not create staging directory. ", e);
        }
        conf.set(MRConfig.MASTER_ADDRESS, "test"); // the default is "local",
                                                   // with which shuffle doesn't happen

    Examples of org.apache.hadoop.fs.FileContext

              conf.set(MRJobConfig.MR_AM_STAGING_DIR,
                  new File(conf.get(MRJobConfig.MR_AM_STAGING_DIR))
                      .getAbsolutePath());
            }
          }
          FileContext fc = FileContext.getFileContext(stagingPath.toUri(), conf);
          if (fc.util().exists(stagingPath)) {
            LOG.info(stagingPath + " exists! deleting...");
            fc.delete(stagingPath, true);
          }
          LOG.info("mkdir: " + stagingPath);
          // mkdir the staging directory so that the right permissions are set while running as a proxy user
          fc.mkdir(stagingPath, null, true);
          // mkdir the done directory as well
          String doneDir = JobHistoryUtils.getConfiguredHistoryServerDoneDirPrefix(conf);
          Path doneDirPath = fc.makeQualified(new Path(doneDir));
          fc.mkdir(doneDirPath, null, true);
        } catch (IOException e) {
          throw new YarnRuntimeException("Could not create staging directory. ", e);
        }
        conf.set(MRConfig.MASTER_ADDRESS, "test"); // the default is "local",
                                                   // with which shuffle doesn't happen