
BRIEF DESCRIPTION:
At least 1 year of experience with Python, Spark, SQL, and data engineering
Primary Skillset: PySpark, Scala/Python with Spark, Azure Synapse, S3, Redshift/Snowflake
Relevant Experience: migration of legacy ETL jobs to AWS Glue using Python and Spark
ROLE SCOPE:
Reverse engineer the existing/legacy ETL jobs
Create the workflow diagrams and review the logic diagrams with Tech Leads
Write equivalent logic in Python & Spark
Unit test the Glue jobs and certify the data loads before passing to system testing
Follow best practices and enable appropriate audit and control mechanisms
Apply strong analytical skills to identify root causes quickly and debug issues efficiently
Take ownership of the deliverables and support the deployments
REQUIREMENTS:
Create data pipelines for data integration into cloud stacks, e.g., Azure Synapse
Code data processing jobs in Azure Synapse Analytics, Python, and Spark
Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.
Should be able to process .json, .parquet, and .avro files
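As a sketch of the file-format requirement above: the `reader_format` helper below is a hypothetical illustration (not part of any listed codebase) that maps a data file's extension to the corresponding Spark reader format.

```python
# Sketch: choose the right Spark reader format from a file's extension.
# `reader_format` is a hypothetical helper; on an AWS Glue / Spark job it
# would feed spark.read as shown in the comment at the bottom.
import os

FORMATS = {".json": "json", ".parquet": "parquet", ".avro": "avro"}

def reader_format(path: str) -> str:
    """Map a data file's extension to a Spark DataFrameReader format name."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in FORMATS:
        raise ValueError(f"unsupported file type: {ext}")
    return FORMATS[ext]

# On a cluster (assuming a SparkSession named `spark`):
#   df = spark.read.format(reader_format(path)).load(path)
# Note: reading .avro requires the spark-avro package on the classpath.
```

The Spark call itself is kept as a comment since it needs a running cluster; the extension dispatch is plain Python.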
PREFERRED BACKGROUND:
Tier 1/2 candidates from IITs/NITs/IIITs
However, relevant experience and a learning attitude take precedence

The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), as well as automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization/tuning/resource allocations
- Excellent understanding of in-memory distributed computing frameworks such as Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL/analytical databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
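The Spark optimization/tuning/resource-allocation requirement above can be sketched with a few standard settings. The keys below are real Spark configuration options; the values are illustrative assumptions, not recommendations.

```python
# Sketch: common Spark tuning knobs behind "optimization/tuning/resource
# allocations". Keys are standard Spark settings; values are illustrative.
SPARK_TUNING = {
    "spark.executor.memory": "4g",          # per-executor heap size
    "spark.executor.cores": "4",            # concurrent tasks per executor
    "spark.sql.shuffle.partitions": "200",  # shuffle parallelism
    "spark.sql.adaptive.enabled": "true",   # adaptive query execution
}

# Applied when building the session (assuming pyspark is installed):
#   builder = SparkSession.builder.appName("etl-job")
#   for key, value in SPARK_TUNING.items():
#       builder = builder.config(key, value)
#   spark = builder.getOrCreate()
```

The session-building part stays in comments since it needs pyspark on the classpath; in practice these values are derived from cluster size and data volume, not fixed.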


Prelude
We are BeyondScale, on a mission to build a mobile learning app that helps organizations easily create internal courses for their workforce. eLearning is booming, and we aim to tap into the under-served non-IT L&D market and make a difference in the livelihoods of millions of people.
We’re now looking for a passionate “Django Developer” who is willing to join us at an early stage and help us build a world-class product.
Django Developer
Job Description:
- 2+ years of experience coding with Python.
- Design, build, and maintain efficient, reusable, and reliable code.
- Eager and proactive to learn new technical skills.
- Hands-on experience in developing web APIs and writing database queries in PostgreSQL (MongoDB, MySQL, and DynamoDB are a plus).
- Good understanding of OOP, multiprocessing, and threading.
- Proficient in testing and debugging programs.
- Well-versed with Git and modern development workflow practices
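The multiprocessing-and-threading requirement above can be illustrated with a minimal standard-library sketch; `square` is a toy function invented for the example.

```python
# Minimal sketch of concurrent work distribution with the standard
# library: threads suit I/O-bound tasks; swapping in ProcessPoolExecutor
# gives true parallelism for CPU-bound tasks (it sidesteps the GIL).
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))
# results == [0, 1, 4, 9, 16]
```

`pool.map` preserves input order even though the tasks run concurrently, which is why the result list is deterministic.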
10+ years of experience in SQL Server administration
Should be hands-on
Able to handle customer communication and expectation management
Proactively plan and implement project/support activities
Focus on continuous improvement
Must have experience managing a team of SQL Server DBAs
Should have good customer and client management skills
Should have good communication skills



Mandatory Skills:
C programming and data structures
Linux internals (system calls, IPC, network programming, POSIX multi-thread programming)
Desirable Skills:
C++ and OOP knowledge
Linux system start-up: SysVinit, systemd
Bootloaders: U-Boot
Message bus protocols like D-Bus
Basic Linux device driver knowledge
Linux build frameworks: Yocto/BitBake, Makefiles
Python scripting
Debugging with tools such as GDB
Version control: Git and SVN
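The system-call and IPC topics above can be sketched from Python (the "Python scripting" skill listed); this is a minimal POSIX-only illustration of the pipe()/fork()/write()/read() pattern, not production code.

```python
# Sketch: the pipe()/fork()/write()/read() system calls listed above,
# driven from Python's os module. POSIX-only (os.fork is not on Windows).
import os

r, w = os.pipe()                  # create a unidirectional pipe
pid = os.fork()                   # split into parent and child
if pid == 0:                      # child process
    os.close(r)                   # child only writes
    os.write(w, b"hello from child")
    os._exit(0)                   # exit immediately, skip cleanup handlers
os.close(w)                       # parent only reads
message = os.read(r, 1024)        # blocks until the child writes
os.close(r)
os.waitpid(pid, 0)                # reap the child to avoid a zombie
# message == b"hello from child"
```

The same structure maps one-to-one onto the C calls `pipe(2)`, `fork(2)`, `write(2)`, `read(2)`, and `waitpid(2)`.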

ROLES AND RESPONSIBILITIES
• Lead the Software Team. Ensure consistent deliveries of planned features while ensuring code quality, testing standards, and processes are maintained.
• Work with the leadership team to cultivate and grow the internal software team culture
• Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
• Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design.
• Solve technical problems of high scope and complexity.
• Exert influence on the overall objectives and long-range goals of your team.
• Experience with performance and optimization problems, particularly at a large scale, and a demonstrated ability to both diagnose and prevent these problems.
• Help to define and improve our internal standards for style, maintainability, and best practices for a high-scale web environment. Maintain and advocate for these standards through code review.
CANDIDATES MUST HAVE
• Experienced in designing and integrating RESTful APIs
• Knowledge of Python
• Excellent debugging and optimization skills
DESIRED SKILLS & EXPERIENCE
• 3-5 years of experience building large-scale software applications and working with large software teams.
• Bachelor’s degree in computer science, information technology, or engineering
• Experience designing and integrating RESTful APIs
• Knowledge of Python and Backend Development
• Experience building Web/Mobile applications
• Excellent debugging and optimization skills
• Unit and Integration testing experience
• Being knowledgeable about engineering processes and good practices
• Passionate about learning new tools. Ability to continuously learn and acquire knowledge.
• Able to adapt to changing complexity of tasks.
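The unit-testing experience asked for above can be sketched with the standard library's unittest; `slugify` is a hypothetical helper invented for illustration, not part of any listed codebase.

```python
# Sketch of a stdlib unit test for a hypothetical helper (`slugify` is
# invented here for illustration only).
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B  "), "a-b")

# Run with: python -m unittest <module_name>
```

Integration tests follow the same shape but exercise real collaborators (a test database, a live API endpoint) instead of isolated functions.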
POSITION: Sales Executive
QUALIFICATION: Degree in Management, Business or Related Field
EXPERIENCE: Fresher/Experienced
GENDER: Male
JOB DESCRIPTION:
- Fast learning ability and a passion for sales
- Knowledge of CRM or sales management
- Negotiation techniques and marketing tactics
LOCATION: Pondicherry
● Good knowledge of Dimensional data warehouse systems
● Reporting data models with SSIS and MS SQL Server
● Knowledge of all aspects of BI, preferably data modeling, data integration, data analysis, and reporting
● Good working knowledge of MS Azure environment
● Knowledge of BI tools like Tableau or PowerBI
● Good communication skills
Comes with the opportunity to upskill on Big Data clusters
Responsibilities
- Design and implement features spanning across systems in a manner that satisfies requirements of performance, scale, security and robustness
- Contribute to improvement of technology and execution processes in the company
- Collaborate in setting technology standards with technology leaders across the company
- Serve as a knowledge center on current and emerging technologies, and help train others when required
- Foster a strong technical culture by mentoring other engineers
- Keep up to date with the latest technologies, evaluate new tools
Relevant Experience and Qualifications
- Demonstrated expertise in system-level design of large-scale distributed systems on the Java stack, and experience with web services and service-oriented architectures
- Solid experience with Spring (Core/Boot/Security/MVC/Data)
- 8-12 years of professional experience in software development.
- Extensive experience with modern open-source systems, including relational/non-relational data stores and big data processing
- Experience with AWS stack, at least the common datastores and services
- Experience with containerization technologies and concepts including Docker
- Basic knowledge about security concepts and secure coding
- Excellent analytical, conceptual and communication skills in spoken and written English.
- Experience leading projects developed across continents.
Great to have Experience and Qualifications
- Knowledge of machine learning concepts, and some hands-on experience implementing machine learning models
- Experience with infrastructure design for cloud based apps, especially AWS.
- Past experience with information and data security standards (PCI DSS, ISO 27000) is very nice to have.

