11+ EDIFACT Jobs in Delhi, NCR and Gurgaon
Apply to 11+ EDIFACT Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest EDIFACT job opportunities across top companies like Google, Amazon & Adobe.

• Sound knowledge of EDI standards such as ANSI X12, EDIFACT, TRADACOMS, XML, and RosettaNet (a short parsing sketch follows this list)
• Good understanding of XSLT, XSD, and XML parsing processes
• webMethods knowledge in the EDI domain is a huge plus
• Good knowledge of scripting languages such as JavaScript and Perl
• Good knowledge of file transfer protocols such as AS2, AS3, FTP/SFTP, and HTTP/HTTPS
• Experience in EDI mapping to/from the following platforms/technologies (including but not limited to):
  - SAP (IDoc)
  - Mainframe
  - CSV/XML/JSON/flat file
  - Word/Excel/PDF
• Good working knowledge of SQL and relational databases (Oracle, MySQL, etc.)
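To make the first bullet concrete, here is an illustrative Python sketch of how an EDIFACT interchange splits into segments, elements, and components. The sample message is hypothetical, and release-character (?) and UNA service-advice handling are omitted for brevity.

```
# Illustrative only: split a raw EDIFACT interchange into segments (ended
# by '), elements (separated by +), and components (separated by :).
raw = "UNB+UNOC:3+SENDER+RECEIVER+230101:1200+1'UNH+1+ORDERS:D:96A:UN'"

for segment in filter(None, raw.split("'")):
    tag, *elements = segment.split("+")
    print(tag, [element.split(":") for element in elements])
```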
Required Skills:
- Experience in a systems administration, SRE, or DevOps-focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in automation and DevOps principles, experience with operational tooling, and the ability to apply best practices for infrastructure and software deployment (e.g., Ansible); see the boto3 sketch after this list
- Experience configuring and operating HAProxy, Nginx, SSH, and MySQL
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
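As a concrete illustration of the scripting side of this role, below is a minimal, hypothetical boto3 sketch that lists running EC2 instances; the region is an assumption.

```
# A minimal sketch of the kind of AWS automation scripting this role
# involves. The region is an assumption; credentials come from the
# standard boto3 chain (env vars, instance profile, etc.).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("PrivateIpAddress"))
```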
Responsibilities:
• Full-cycle tech recruitment: source and hire the technical team through databases, networking sites, social media, direct outreach, referrals, etc.
• Manage the hiring process and provide a high-touch candidate experience from application to offer.
• Be able to hire SDE-1s, SDE-2s, SDE-3s, Tech Leads, PMs, etc.
• Provide analytical, well-documented recruiting MIS reports to the rest of the team.
• Act as a point of contact and build influential candidate relationships during the selection process.
Must-have Skills:
• 3 to 8 years of talent acquisition experience
• At least 3 years of tech hiring for start-ups/product-based companies
• Must have worked in a fast-paced startup
• Prior experience in technical hiring, either as an in-house recruiter or at a recruitment consulting firm
• Ability to conduct different types of interviews: structured, competency-based, psychometric, stress, etc.
• Good negotiation skills
• Hands-on experience with various selection processes: video and telephonic interviewing, background/reference checks, etc.
• Prior knowledge of various recruitment management systems
• Good understanding of technology and technical systems
• Excellent communication and interpersonal skills, with the ability to communicate across all levels and functions, both internally and externally
• An attitude to work hard and smart


Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
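For illustration, a minimal PySpark sketch of such an ingest/transform/load flow is below; the paths, column names, and target table are assumptions, not a prescribed design.

```
# A hypothetical ingest -> transform -> load flow on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest: read raw CSV files landed in the data lake (path is an assumption)
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Transform: cast types, drop malformed rows, stamp a load date
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: append to a Delta table for downstream analysis
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```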
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
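A data quality check of the kind described might look like the following hypothetical PySpark gate; the table, column, and 1% threshold are illustrative assumptions.

```
# A hypothetical validation rule: fail the run if more than 1% of rows
# are missing customer_id. Table, column, and threshold are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("analytics.orders")

total = df.count()
missing = df.filter(F.col("customer_id").isNull()).count()
if total and missing / total > 0.01:
    raise ValueError(f"Quality gate failed: {missing}/{total} rows lack customer_id")
```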
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.

Dare2Compete is looking for MEAN, MERN, and LAMP Stack Developers. Developer responsibilities include building our applications from concept to completion, from the home page through site layout and functionality.
Responsibilities of the Candidate:
- Write well-designed, testable, efficient code by using the best software development practices
- Integrate data from various back-end services and databases
- Gather and refine specifications and requirements based on technical needs
- Be responsible for maintaining, expanding, and scaling our products
- Stay plugged into emerging technologies/industry trends and apply them to operations and activities
- End-to-end management and coding of all our products and services
- Make products modular, flexible, scalable, and robust
Our Tech Stack:
- Angular 10
- React
- PHP Laravel
- MongoDB
- Express.js
- Node.js
- MySQL 8
- NoSQL DB
- Linux OS
- Apache
- Amazon AWS services – EC2, WAF, EBS, SNS, SES, Lambda, Fargate, etc.
- The whole ecosystem of AWS
- And many more…
Required Experience, Skills, and Qualifications:
- Must have 1 to 10 years of experience in the technologies we work with
- Proven working experience in programming – Full Stack
- Top-notch programming and analytical skills
- Must know and have experience in Angular (version 2 onwards)
- A solid understanding of how web applications work including security, session management, and best development practices
- Adequate knowledge of relational database systems, Object-Oriented Programming and web application development
- Ability to work and thrive in a fast-paced environment, learn rapidly and master diverse web technologies and techniques
- B.Tech in Computer Science or a related field or equivalent
Salary: As per industry benchmarks. This won’t be a restriction for the right candidate.

What you will do:
- Leveraging your deep knowledge to provide technical leadership and take projects from zero to completion
- Architecting, building and maintaining scalable data pipelines and access patterns related to permissions and security
- Researching, evaluating and utilising new technologies/tools/frameworks centred around high-volume data processing
- Building and deploying large-scale data processing pipelines in a production environment
- Working with data scientists and other engineers to develop data pipelines for model development and productization
- Identifying gaps and implementing solutions for data security, quality and automation of processes
- Providing inputs on the right tool options and model designs for use cases
- Designing scalable implementations of the models developed by our Data Scientists
Desired Candidate Profile
What you need to have:
- 3+ years of strong programming experience in PySpark and Python
- Knowledge in Python, SQL, Spark (Pyspark)
- Exposure to AWS/Azure cloud tools and services such as S3, Athena, Apache NiFi, and Apache Airflow
- Analytical and problem-solving skills
- Knowledge of Scrum and code-sharing tools: Git, Jira
- Experience with processing frameworks such as Spark, Spark Streaming, Hive, Sqoop, Kafka, etc. (see the streaming sketch after this list)
- Deep understanding of measuring and ensuring data quality at scale, and of the tooling required to monitor and optimise the performance of our data pipelines
- Experience building data pipelines and data-centric applications using distributed storage platforms, and shipping production data pipelines that source data from a diverse array of sources
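As a concrete illustration of the streaming side, here is a minimal, hypothetical PySpark Structured Streaming job that reads from Kafka and lands payloads as Parquet; the broker address, topic, and paths are assumptions, and the spark-sql-kafka package must be on the classpath.

```
# A minimal Structured Streaming sketch: Kafka in, Parquet out.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read raw events from a hypothetical Kafka topic
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .selectExpr("CAST(value AS STRING) AS payload")
)

# Land payloads as Parquet with checkpointing for exactly-once sinks
(
    events.writeStream.format("parquet")
          .option("path", "/data/lake/events")
          .option("checkpointLocation", "/data/lake/_checkpoints/events")
          .start()
          .awaitTermination()
)
```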
We are storytellers for all that's digital. Whether it's digital marketing or social media marketing, social advertising or search engine marketing, we even build end-to-end software products such as websites, mobile apps, and customized software to help startups, new brands, and struggling companies. Our services encompass a wide range of digital solutions: branding, social media marketing, online advertising, graphic design, content marketing, software development, app and web development, user interface and experience design, and online research. We have some amazing and exciting projects for young audiences to work on. Our team is full of passionate hustlers executing at high speed, and we look forward to having more excited and proactive individuals onboard.
Intern's day to day responsibilities include:
1. Work on digital marketing campaign planning
2. Handle efficient advertising campaign setup across platforms (Google AdWords, Facebook Ads)
3. Work on LinkedIn ads, Twitter, and email marketing advertising (to name a few)
4. Conduct preliminary online market research & competition analysis
5. Assess audience interests, behaviours and demographics across platforms for every industry
6. Track and analyze website traffic flow to effectively optimize live ads
7. Work on effective keyword planning on the basis of search volumes, cost per clicks, bid values, etc.
8. Work on effective search engine optimization strategies (on-page and off-Page both) for websites
9. Monitor online marketing trends on social media
10. Prepare accurate in-depth reports on the overall performance of marketing campaigns.
11. Prepare pitch decks and proposals
12. Coordinate with content strategists, graphic designers, digital marketers and clients
13. Explore the complete gamut of the features of various social media platforms designed for marketers
Skill(s) required:
- Social Media Marketing, Digital Marketing, Search Engine Optimization (SEO), English Proficiency
Experience: 6-9 yrs
Location: Noida
Job Description:
- Must have 3-4 years of experience in SSIS and MySQL
- Good experience in Tableau
- Experience in SQL Server
- 1+ year of experience in Tableau
- Knowledge of ETL tools
- Knowledge of data warehousing

Our software developer in this full-stack role will constantly build new features as discussed with the product team.
Our front-end codebase is in Angular, while the back-end codebase is in Python/Django.
Pre-requisites
- Expert-level knowledge of Angular and JavaScript
- Intermediate-level knowledge of Python
Responsibilities
- Build new products / features from scratch
- Work on implementing and maintaining the ticket booking flow on the site
- Work on making existing features faster and better
- Work on reducing response time for all our APIs (one common approach is sketched below)
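A hypothetical sketch of one common way to cut API response time using the Redis-backed Django cache this stack already includes. The view, cache key, and 60-second TTL are illustrative, and fetch_status_from_db is a made-up helper, not actual Trainman code.

```
# Illustrative caching pattern for a read-heavy Django API endpoint.
from django.core.cache import cache
from django.http import JsonResponse

def train_status(request, train_no):
    key = f"train-status:{train_no}"
    data = cache.get(key)
    if data is None:
        data = fetch_status_from_db(train_no)  # hypothetical DB lookup
        cache.set(key, data, timeout=60)  # serve from Redis for 60 seconds
    return JsonResponse(data)
```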
Qualifications & Skills
- Bachelor's degree or equivalent experience
- 1-6 years' experience in full stack development
- Hands-on with Angular, Python, Django, MySQL, MongoDB, Redis, and Django REST framework
- Ability to multi-task
- Strong verbal, written, and organizational skills
- Good analytical capabilities
- Mobile ecosystem knowledge
Trainman gets more than 5 lakh daily visits, and its Android app has more than 65 lakh downloads. The role will be challenging, so only those who love working at scale should apply.