Package org.apache.hadoop.fs

Examples of org.apache.hadoop.fs.FileContext

While URI names are very flexible, they require knowing the name or address of the server. For convenience, one often wants to access the default file system in one's environment without knowing its name or address. This has the additional benefit of allowing the default file system to be changed (e.g. when an admin moves an application from cluster1 to cluster2).

To facilitate this, Hadoop supports a notion of a default file system. The user can set a default file system, although it is typically set up via the default configuration in your environment. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly, a user can use working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, it can be in a different FS.

Hence Hadoop path names can be one of:

  - fully qualified URIs: scheme://authority/path
  - slash-relative names: /path, resolved relative to the default file system
  - working-directory-relative names: path, resolved relative to the working directory

Relative paths with a scheme (scheme:foo/bar) are illegal.
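For example, the following rough sketch (not taken from the snippets below; the namenode address and paths are hypothetical) shows how a FileContext resolves each of the legal forms:

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class PathNameExamples {
  public static void main(String[] args) throws Exception {
    // Assume core-site.xml sets the default FS to hdfs://nn.example.com:8020 (hypothetical).
    FileContext fc = FileContext.getFileContext();

    // Fully qualified URI: scheme, authority and path are all explicit.
    Path qualified = new Path("hdfs://nn.example.com:8020/user/alice/data.txt");

    // Slash-relative name: resolved against the default scheme and authority.
    Path slashRelative = new Path("/user/alice/data.txt");

    // Working-directory-relative name: resolved against the working directory
    // (the directory is assumed to exist).
    fc.setWorkingDirectory(new Path("/user/alice"));
    Path wdRelative = new Path("data.txt");

    // makeQualified() shows the fully resolved form of each name.
    System.out.println(fc.makeQualified(qualified));
    System.out.println(fc.makeQualified(slashRelative));
    System.out.println(fc.makeQualified(wdRelative));
  }
}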

*** The Role of the FileContext and configuration defaults ***

The FileContext provides the file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in a Unix system. These two properties are, in general, obtained from the default configuration file in your environment (see Configuration). No other configuration parameters are obtained from the default config as far as the file context layer is concerned. All file system instances (i.e. deployments of file systems) have default properties; we call these server-side (SS) defaults. Operations like create allow one to select many properties: either pass them in as explicit parameters or use the SS defaults.

The file-system-related SS defaults include properties such as the block size, the replication factor, the buffer size, and the checksum options.
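To make the distinction concrete, here is a hedged sketch (the path and the override values are made up, not taken from this page): a create() call can either rely entirely on the SS defaults or override selected ones with explicit Options.CreateOpts parameters, while the umask comes from the FileContext itself.

import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options.CreateOpts;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateWithDefaultsExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext();   // default FS from the default config
    fc.setUMask(new FsPermission((short) 022));      // per-process umask held by the context

    Path file = new Path("/tmp/fc-example.dat");     // hypothetical path

    // Rely entirely on the server-side defaults (block size, replication, buffer size, ...).
    FSDataOutputStream out = fc.create(file,
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE));
    out.close();

    // Override selected defaults with explicit create options.
    out = fc.create(file,
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        CreateOpts.blockSize(128 * 1024 * 1024),     // 128 MB blocks
        CreateOpts.repFac((short) 2),                // replication factor 2
        CreateOpts.bufferSize(64 * 1024));           // 64 KB write buffer
    out.close();
  }
}

If an option is not passed explicitly, the value comes from the SS defaults of the target file system rather than from the client's own configuration.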

*** Usage Model for the FileContext class ***

Example 1: Use the default config read from $HADOOP_CONFIG/core.xml; unspecified values come from core-defaults.xml in the release jar.
Example 2: Get a FileContext with a specific URI as the default FS.
Example 3: Get a FileContext with the local file system as the default FS.
Example 4: Use a specific config, ignoring $HADOOP_CONFIG. Generally you should not need to use a specific config unless you are doing something out of the ordinary.
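A minimal sketch of the four ways to obtain a FileContext (the namenode URI and the overridden property value are assumptions for illustration, not from this page):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;

public class GetFileContextExamples {
  public static void main(String[] args) throws Exception {
    // Example 1: default FS taken from the default config in your environment.
    FileContext fc1 = FileContext.getFileContext();

    // Example 2: a specific URI as the default FS (hypothetical namenode address).
    FileContext fc2 =
        FileContext.getFileContext(URI.create("hdfs://nn.example.com:8020"));

    // Example 3: the local file system as the default FS.
    FileContext fc3 = FileContext.getLocalFSFileContext();

    // Example 4: a specific Configuration instead of the environment's default config.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://nn.example.com:8020");  // hypothetical override
    FileContext fc4 = FileContext.getFileContext(conf);
  }
}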

      dispatcher.getEventHandler().handle(
        new ContainerDiagnosticsUpdateEvent(containerId, message));
    } finally {
      // cleanup pid file if present
      if (pidFilePath != null) {
        FileContext lfs = FileContext.getLocalFSFileContext();
        lfs.delete(pidFilePath, false);
      }
    }
  }


  @Test
  public void testSaveNamespace() throws IOException {
    MiniDFSCluster cluster = null;
    DistributedFileSystem fs = null;
    FileContext fc;
    try {
      Configuration conf = new HdfsConfiguration();
      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDatanodes).format(true).build();
      cluster.waitActive();
      fs = (cluster.getFileSystem());
      fc = FileContext.getFileContext(cluster.getURI(0));

      // Saving image without safe mode should fail
      DFSAdmin admin = new DFSAdmin(conf);
      String[] args = new String[]{"-saveNamespace"};
      try {
        admin.run(args);
      } catch(IOException eIO) {
        assertTrue(eIO.getLocalizedMessage().contains("Safe mode should be turned ON"));
      } catch(Exception e) {
        throw new IOException(e);
      }
      // create new file
      Path file = new Path("namespace.dat");
      writeFile(fs, file, replication);
      checkFile(fs, file, replication);

      // create new link
      Path symlink = new Path("file.link");
      fc.createSymlink(file, symlink, false);
      assertTrue(fc.getFileLinkStatus(symlink).isSymlink());

      // verify that the edits file is NOT empty
      Collection<URI> editsDirs = cluster.getNameEditsDirs(0);
      for(URI uri : editsDirs) {
        File ed = new File(uri.getPath());
        assertTrue(new File(ed, "current/"
                            + NNStorage.getInProgressEditsFileName(1))
                   .length() > Integer.SIZE/Byte.SIZE);
      }

      // Saving image in safe mode should succeed
      fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
      try {
        admin.run(args);
      } catch(Exception e) {
        throw new IOException(e);
      }
     
      // TODO: Fix the test to not require a hard-coded transaction count.
      final int EXPECTED_TXNS_FIRST_SEG = 13;
     
      // the following steps should have happened:
      //   edits_inprogress_1 -> edits_1-12  (finalized)
      //   fsimage_12 created
      //   edits_inprogress_13 created
      //
      for(URI uri : editsDirs) {
        File ed = new File(uri.getPath());
        File curDir = new File(ed, "current");
        LOG.info("Files in " + curDir + ":\n  " +
            Joiner.on("\n  ").join(curDir.list()));
        // Verify that the first edits file got finalized
        File originalEdits = new File(curDir,
                                      NNStorage.getInProgressEditsFileName(1));
        assertFalse(originalEdits.exists());
        File finalizedEdits = new File(curDir,
            NNStorage.getFinalizedEditsFileName(1, EXPECTED_TXNS_FIRST_SEG));
        GenericTestUtils.assertExists(finalizedEdits);
        assertTrue(finalizedEdits.length() > Integer.SIZE/Byte.SIZE);

        GenericTestUtils.assertExists(new File(ed, "current/"
                       + NNStorage.getInProgressEditsFileName(
                           EXPECTED_TXNS_FIRST_SEG + 1)));
      }
     
      Collection<URI> imageDirs = cluster.getNameDirs(0);
      for (URI uri : imageDirs) {
        File imageDir = new File(uri.getPath());
        File savedImage = new File(imageDir, "current/"
                                   + NNStorage.getImageFileName(
                                       EXPECTED_TXNS_FIRST_SEG));
        assertTrue("Should have saved image at " + savedImage,
            savedImage.exists());       
      }

      // restart cluster and verify file exists
      cluster.shutdown();
      cluster = null;

      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDatanodes).format(false).build();
      cluster.waitActive();
      fs = (cluster.getFileSystem());
      checkFile(fs, file, replication);
      fc = FileContext.getFileContext(cluster.getURI(0));
      assertTrue(fc.getFileLinkStatus(symlink).isSymlink());
    } finally {
      if(fs != null) fs.close();
      cleanup(cluster);
      cluster = null;
    }

  private static final Log LOG = LogFactory.getLog(TestFSDownload.class);
 
  @AfterClass
  public static void deleteTestDir() throws IOException {
    FileContext fs = FileContext.getLocalFSFileContext();
    fs.delete(new Path("target", TestFSDownload.class.getSimpleName()), true);
  }

  @Test
  public void testDownload() throws IOException, URISyntaxException,
      InterruptedException {
    Configuration conf = new Configuration();
    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077");
    FileContext files = FileContext.getLocalFSFileContext(conf);
    final Path basedir = files.makeQualified(new Path("target",
      TestFSDownload.class.getSimpleName()));
    files.mkdir(basedir, null, true);
    conf.setStrings(TestFSDownload.class.getName(), basedir.toString());
   
    Map<LocalResource, LocalResourceVisibility> rsrcVis =
        new HashMap<LocalResource, LocalResourceVisibility>();

    Random rand = new Random();
    long sharedSeed = rand.nextLong();
    rand.setSeed(sharedSeed);
    System.out.println("SEED: " + sharedSeed);

    Map<LocalResource,Future<Path>> pending =
      new HashMap<LocalResource,Future<Path>>();
    ExecutorService exec = Executors.newSingleThreadExecutor();
    LocalDirAllocator dirs =
      new LocalDirAllocator(TestFSDownload.class.getName());
    int[] sizes = new int[10];
    for (int i = 0; i < 10; ++i) {
      sizes[i] = rand.nextInt(512) + 512;
      LocalResourceVisibility vis = LocalResourceVisibility.PUBLIC;
      switch (i%3) {
      case 1:
        vis = LocalResourceVisibility.PRIVATE;
        break;
      case 2:
        vis = LocalResourceVisibility.APPLICATION;
        break;      
      }
      Path p = new Path(basedir, "" + i);
      LocalResource rsrc = createFile(files, p, sizes[i], rand, vis);
      rsrcVis.put(rsrc, vis);
      Path destPath = dirs.getLocalPathForWrite(
          basedir.toString(), sizes[i], conf);
      FSDownload fsd =
          new FSDownload(files, UserGroupInformation.getCurrentUser(), conf,
              destPath, rsrc, new Random(sharedSeed));
      pending.put(rsrc, exec.submit(fsd));
    }

    try {
      for (Map.Entry<LocalResource,Future<Path>> p : pending.entrySet()) {
        Path localized = p.getValue().get();
        assertEquals(sizes[Integer.valueOf(localized.getName())], p.getKey()
            .getSize());

        FileStatus status = files.getFileStatus(localized.getParent());
        FsPermission perm = status.getPermission();
        assertEquals("Cache directory permissions are incorrect",
            new FsPermission((short)0755), perm);

        status = files.getFileStatus(localized);
        perm = status.getPermission();
        System.out.println("File permission " + perm +
            " for rsrc vis " + p.getKey().getVisibility().name());
        assert(rsrcVis.containsKey(p.getKey()));
        switch (rsrcVis.get(p.getKey())) {

  }
 
  @Test
  public void testDirDownload() throws IOException, InterruptedException {
    Configuration conf = new Configuration();
    FileContext files = FileContext.getLocalFSFileContext(conf);
    final Path basedir = files.makeQualified(new Path("target",
      TestFSDownload.class.getSimpleName()));
    files.mkdir(basedir, null, true);
    conf.setStrings(TestFSDownload.class.getName(), basedir.toString());
   
    Map<LocalResource, LocalResourceVisibility> rsrcVis =
        new HashMap<LocalResource, LocalResourceVisibility>();
 
    Random rand = new Random();
    long sharedSeed = rand.nextLong();
    rand.setSeed(sharedSeed);
    System.out.println("SEED: " + sharedSeed);

    Map<LocalResource,Future<Path>> pending =
      new HashMap<LocalResource,Future<Path>>();
    ExecutorService exec = Executors.newSingleThreadExecutor();
    LocalDirAllocator dirs =
      new LocalDirAllocator(TestFSDownload.class.getName());
    for (int i = 0; i < 5; ++i) {
      LocalResourceVisibility vis = LocalResourceVisibility.PUBLIC;
      switch (rand.nextInt() % 3) { // note: a negative nextInt() matches neither case, so vis stays PUBLIC
      case 1:
        vis = LocalResourceVisibility.PRIVATE;
        break;
      case 2:
        vis = LocalResourceVisibility.APPLICATION;
        break;      
      }

      Path p = new Path(basedir, "dir" + i + ".jar");
      LocalResource rsrc = createJar(files, p, vis);
      rsrcVis.put(rsrc, vis);
      Path destPath = dirs.getLocalPathForWrite(
          basedir.toString(), conf);
      FSDownload fsd =
          new FSDownload(files, UserGroupInformation.getCurrentUser(), conf,
              destPath, rsrc, new Random(sharedSeed));
      pending.put(rsrc, exec.submit(fsd));
    }
   
    try {
     
      for (Map.Entry<LocalResource,Future<Path>> p : pending.entrySet()) {
        Path localized = p.getValue().get();
        FileStatus status = files.getFileStatus(localized);

        System.out.println("Testing path " + localized);
        assert(status.isDirectory());
        assert(rsrcVis.containsKey(p.getKey()));
       

    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,  "000");

    try {
      Path stagingPath = FileContext.getFileContext(conf).makeQualified(
          new Path(conf.get(MRJobConfig.MR_AM_STAGING_DIR)));
      FileContext fc=FileContext.getFileContext(stagingPath.toUri(), conf);
      if (fc.util().exists(stagingPath)) {
        LOG.info(stagingPath + " exists! deleting...");
        fc.delete(stagingPath, true);
      }
      LOG.info("mkdir: " + stagingPath);
      //mkdir the staging directory so that right permissions are set while running as proxy user
      fc.mkdir(stagingPath, null, true);
      //mkdir done directory as well
      String doneDir = JobHistoryUtils.getConfiguredHistoryServerDoneDirPrefix(conf);
      Path doneDirPath = fc.makeQualified(new Path(doneDir));
      fc.mkdir(doneDirPath, null, true);
    } catch (IOException e) {
      throw new YarnException("Could not create staging directory. ", e);
    }
    conf.set(MRConfig.MASTER_ADDRESS, "test"); // override the default "local",
                                               // with which shuffle doesn't happen

  public void init(Configuration conf) {
    this.recordFactory = RecordFactoryProvider.getRecordFactory(conf);

    try {
      // TODO queue deletions here, rather than NM init?
      FileContext lfs = getLocalFileContext(conf);
      lfs.setUMask(new FsPermission((short)FsPermission.DEFAULT_UMASK));
      List<String> localDirs = dirsHandler.getLocalDirs();
      for (String localDir : localDirs) {
        // $local/usercache
        Path userDir = new Path(localDir, ContainerLocalizer.USERCACHE);
        lfs.mkdir(userDir, null, true);
        // $local/filecache
        Path fileDir = new Path(localDir, ContainerLocalizer.FILECACHE);
        lfs.mkdir(fileDir, null, true);
        // $local/nmPrivate
        Path sysDir = new Path(localDir, NM_PRIVATE_DIR);
        lfs.mkdir(sysDir, NM_PRIVATE_PERM, true);
      }

      List<String> logDirs = dirsHandler.getLogDirs();
      for (String logDir : logDirs) {
        lfs.mkdir(new Path(logDir), null, true);
      }
    } catch (IOException e) {
      throw new YarnException("Failed to initialize LocalizationService", e);
    }

    app.waitForState(Service.STATE.STOPPED);

    String jobhistoryDir = JobHistoryUtils
        .getHistoryIntermediateDoneDirForUser(conf);
   
    FileContext fc = null;
    try {
      fc = FileContext.getFileContext(conf);
    } catch (IOException ioe) {
      LOG.info("Can not get FileContext", ioe);
      throw (new Exception("Can not get File Context"));
    }
   
    if (numMaps == numSuccessfulMaps) {
      String summaryFileName = JobHistoryUtils
          .getIntermediateSummaryFileName(jobId);
      Path summaryFile = new Path(jobhistoryDir, summaryFileName);
      String jobSummaryString = getJobSummary(fc, summaryFile);
      Assert.assertNotNull(jobSummaryString);
      Assert.assertTrue(jobSummaryString.contains("resourcesPerMap=100"));
      Assert.assertTrue(jobSummaryString.contains("resourcesPerReduce=100"));

      Map<String, String> jobSummaryElements = new HashMap<String, String>();
      StringTokenizer strToken = new StringTokenizer(jobSummaryString, ",");
      while (strToken.hasMoreTokens()) {
        String keypair = strToken.nextToken();
        jobSummaryElements.put(keypair.split("=")[0], keypair.split("=")[1]);
      }

      Assert.assertEquals("JobId does not match", jobId.toString(),
          jobSummaryElements.get("jobId"));
      Assert.assertEquals("JobName does not match", "test",
          jobSummaryElements.get("jobName"));
      Assert.assertTrue("submitTime should not be 0",
          Long.parseLong(jobSummaryElements.get("submitTime")) != 0);
      Assert.assertTrue("launchTime should not be 0",
          Long.parseLong(jobSummaryElements.get("launchTime")) != 0);
      Assert.assertTrue("firstMapTaskLaunchTime should not be 0",
          Long.parseLong(jobSummaryElements.get("firstMapTaskLaunchTime")) != 0);
      Assert
      .assertTrue(
          "firstReduceTaskLaunchTime should not be 0",
          Long.parseLong(jobSummaryElements.get("firstReduceTaskLaunchTime")) != 0);
      Assert.assertTrue("finishTime should not be 0",
          Long.parseLong(jobSummaryElements.get("finishTime")) != 0);
      Assert.assertEquals("Mismatch in num map slots", numSuccessfulMaps,
          Integer.parseInt(jobSummaryElements.get("numMaps")));
      Assert.assertEquals("Mismatch in num reduce slots", numReduces,
          Integer.parseInt(jobSummaryElements.get("numReduces")));
      Assert.assertEquals("User does not match", System.getProperty("user.name"),
          jobSummaryElements.get("user"));
      Assert.assertEquals("Queue does not match", "default",
          jobSummaryElements.get("queue"));
      Assert.assertEquals("Status does not match", "SUCCEEDED",
          jobSummaryElements.get("status"));
    }

    JobHistory jobHistory = new JobHistory();
    jobHistory.init(conf);
    HistoryFileInfo fileInfo = jobHistory.getJobFileInfo(jobId);
    JobInfo jobInfo;
    long numFinishedMaps;
   
    synchronized(fileInfo) {
      Path historyFilePath = fileInfo.getHistoryFile();
      FSDataInputStream in = null;
      LOG.info("JobHistoryFile is: " + historyFilePath);
      try {
        in = fc.open(fc.makeQualified(historyFilePath));
      } catch (IOException ioe) {
        LOG.info("Can not open history file: " + historyFilePath, ioe);
        throw (new Exception("Can not open History File"));
      }

    String jobhistoryFileName = FileNameIndexUtils
        .getDoneFileName(jobIndexInfo);

    Path historyFilePath = new Path(jobhistoryDir, jobhistoryFileName);
    FSDataInputStream in = null;
    FileContext fc = null;
    try {
      fc = FileContext.getFileContext(conf);
      in = fc.open(fc.makeQualified(historyFilePath));
    } catch (IOException ioe) {
      LOG.info("Can not open history file: " + historyFilePath, ioe);
      throw (new Exception("Can not open History File"));
    }
