Palantir Data Engineer/Architect at VLaunchU

  • Company name: VLaunchU
  • Working location: Office Location
  • Job type: Full Time

Experience: 3+ years in Palantir Foundry required (7+ years overall; see job description)

Pay: Salary information not included

Type: Full Time

Location: All India (Remote)

Skills: Cassandra, Palantir, Azure, PostgreSQL, SQL, Scala, Python, Hadoop, AWS, Storm, Kafka, C++, Ontology, Spark, MongoDB, Java, NoSQL, Object Explorer, Foundry ML, Foundry Tools, Object Editor, TypeScript, Code Workbook, Code Repository, Ontology Manager, Contour, Spark Streaming, PySpark

Job Description

  • A minimum of 7+ years of experience in the field.
  • Experience migrating Palantir to Azure/AWS.
  • A good understanding and working knowledge of Foundry tools: Ontology, Contour, Object Explorer, Ontology Manager, Object Editor (using Actions/TypeScript), Code Workbook, Code Repository, and Foundry ML.
  • At least 3+ years of experience with Palantir Foundry, including hands-on experience migrating Foundry to Azure/AWS cloud.
  • A minimum of 5+ years in a Data Engineer role, with proficiency in tools such as Hadoop, Spark, and Kafka.
  • Familiarity with relational SQL and NoSQL databases, including PostgreSQL and Cassandra/MongoDB.
  • Experience with stream-processing systems such as Storm and Spark Streaming, and with object-oriented/functional scripting languages such as Python, Java, C++, and Scala.
  • A proven track record of building and optimizing "big data" pipelines, architectures, and data sets, with a minimum of 5+ years in PySpark/Python.
  • Expertise in building processes that support data transformation, data structures, metadata, dependency, and workload management.

The position is remote, and a very high skill level is expected.