Job Summary
- Good experience and exposure to cloud-native architecture, development, and deployment on public clouds such as AWS and Google Cloud
- Responsible for Linux server installation, maintenance, monitoring, data backup and recovery, security, and administration
- Understanding of clusters, distributed architecture, and container environments
- Experience in networking, including Linux, software-defined networking, network virtualization, open protocols, application acceleration and load balancing, DNS, and virtual private networks
- Knowledge of middleware such as MySQL, Apache, etc.
- Responsible for managing network storage
- Disaster recovery and incident response planning
- Configuring/monitoring firewalls, routers, switches, and other network devices

Responsibilities and Duties
- Support the globally distributed cloud development teams by maintaining the cloud infrastructure labs hosted in a hybrid cloud environment
- Contribute towards optimization of performance and cost of running the labs
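The server monitoring duty above often starts with something as simple as a disk-usage check. A minimal sketch in Python; the `check_disk` helper and the 80% threshold are illustrative assumptions, not part of any stack named in the posting.

```python
import shutil

def check_disk(path="/", threshold=0.8):
    """Return True if disk usage at `path` exceeds `threshold` (fraction used)."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction > threshold

# Example: flag the root filesystem when it is more than 80% full.
if check_disk("/"):
    print("WARNING: disk usage above threshold")
```

In practice a check like this would run from cron or a monitoring agent and page on sustained breaches rather than a single reading.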
Qualifications:
• Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university
• 5-8 years of experience in building scalable, secure, and compliant systems
• More than 2 years of experience working with GCP deployments serving millions of daily visitors
• 5+ years of hosting experience in a large, heavy-traffic environment
• 5+ years of production application support experience in a high-uptime environment
• Software development and monitoring knowledge with automated builds
• Technology:
  o Cloud: AWS or Google Cloud
  o Source Control: GitLab, Bitbucket, or GitHub
  o Container Concepts: Docker, Microservices
  o Continuous Integration: Jenkins, Bamboo
  o Infrastructure Automation: Puppet, Chef, or Ansible
  o Deployment Automation: Jenkins, VSTS, or Octopus Deploy
  o Orchestration: Kubernetes, Mesos, Swarm
  o Automation: Node.js or Python
  o Linux environment network administration, DNS, firewall, and security management
• Ability to adapt to startup culture, handle multiple competing priorities, meet deadlines, and troubleshoot problems
DESCRIPTION:
- We're looking for an experienced Data Engineer with strong cloud technology experience to join our team and help our big data team take our products to the next level.
- This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and a love for working with cutting-edge big data platforms.
- You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis Streams, EMR, Redshift), Spark, and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and data warehousing solutions.

REQUIREMENTS:
- Strong background working on large-scale data warehousing and data processing solutions.
- Strong Python and Spark programming experience.
- Strong experience in building big data pipelines.
- Very strong SQL skills are an absolute must.
- Good knowledge of OO, functional, and procedural programming paradigms.
- Strong understanding of various design patterns.
- Strong understanding of data structures and algorithms.
- Strong experience with Linux operating systems.
- At least 2 years of experience working as a software developer in a data-driven environment.
- Experience working in an agile environment.
- Lots of passion, motivation, and drive to succeed!

Highly desirable:
- Understanding of agile principles, specifically Scrum.
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous.

BENEFITS:
Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies.
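The SQL and pipeline requirements above boil down to extract-transform-load steps driven by queries. A tiny sketch using the stdlib `sqlite3` module as a stand-in for a warehouse like Redshift; the `events` table and its data are invented for illustration.

```python
import sqlite3

# Illustrative ETL step: load raw events, then aggregate per user with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # → [('u1', 15.0), ('u2', 7.5)]
```

In a real pipeline the same GROUP BY aggregation would run inside the warehouse (or as a Spark job) rather than pulling raw rows into Python.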
Our team of marketing data specialists offers a wide range of skills suitable for any challenge, covering everything from web analytics to data engineering, data science, and software development.

Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent, and aggressive growth of Datalicious.
Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve directly impacts millions of real people at large scale across industries.
Work Culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk: the CEO organises nerf-gun bouts in the middle of a hectic day.
Money: We'd love to have a long-term relationship, because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.

So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure across a diverse range of the world's best clients, products, and industry-leading technologies, then Datalicious is the company for you!
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to make data-driven, insightful decisions.

Must-have skills:
- Hands-on experience with database systems (structured and unstructured).
- Programming in Python, R, or SAS.
- Overall knowledge of and exposure to architecting solutions on cloud platforms like GCP, AWS, and Microsoft Azure.
- Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code.
- Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures.
- Optimize data structures for efficient querying of those systems.
- Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable.
- Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design.
- Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities.
- At least 2 years of experience in building applications, solutions, and products based on analytics.
- Data extraction, data cleansing, and transformation.
- Strong knowledge of REST APIs, HTTP servers, and MVC architecture.
- Knowledge of continuous integration/continuous deployment.

Preferred but not required:
- Machine learning and deep learning experience.
- Certification on any cloud platform.
- Experience with data migration from on-prem to cloud environments.
- Exceptional analytical, quantitative, problem-solving, and critical thinking skills.
- Excellent verbal and written communication skills.

Work Location: Bangalore
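"Clean, fault-tolerant code" in pipeline work usually begins with retrying transient failures. A minimal sketch of the pattern; the `retry` helper and `flaky_extract` source are hypothetical, not a Pluto7 API.

```python
import time

def retry(fn, attempts=3, delay=0.01):
    """Call `fn`, retrying on exception with a short fixed delay between tries."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(delay)

calls = {"n": 0}

def flaky_extract():
    # Fails twice, then succeeds - stands in for a transient API/network error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return [{"id": 1, "value": 42}]

print(retry(flaky_extract))  # → [{'id': 1, 'value': 42}]
```

Production pipelines typically add exponential backoff with jitter and retry only on error types known to be transient.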
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata
Pramata’s unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, converting that data into high-quality, actionable information accessible through one or more applications on Pramata's cloud-based customer digitization platform. Pramata’s customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California, and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user’s role and responsibilities. This is done through Pramata’s unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible, and highly secure.

The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and day-to-day support of development teams.
You will manage the development of capabilities to achieve higher automation, quality, and performance in automated build and deployment management, release management, on-demand environment configuration and automation, configuration and change management, and production environment support:
- Application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics.
- Security: ensure that the highly sensitive data from our customers is secure at all times.
- Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues.
- High availability and disaster recovery: build and maintain systems designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place.
- Automate provisioning and integration tasks as required to deploy new code.
- Monitoring: take proactive steps to monitor complex interdependent systems so that issues are identified and addressed in real time.

Skills required:
- Excellent communicator with great interpersonal skills, driving clarity about intricate systems.
- Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
- Good understanding of software application builds, configuration management, and deployments.
- Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation.
- Comfortable with collaboration, open communication, and reaching across functional borders.
- Advanced problem-solving and task break-down ability.

Additional skills (good to have but not mandatory):
- In-depth understanding of and experience working with any cloud platform (e.g., AWS, Azure, Google Cloud).
- Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
- Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude.

Minimum qualifications:
- Bachelor’s degree in Computer Science or a related field.
- Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software.
- Strong programming skills in Python, Shell, or Java.
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet.
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS.
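Configuration management tools like the ones listed above revolve around idempotent changes: applying the same change twice produces the same state as applying it once. A minimal pure-Python analogue of Ansible's lineinfile pattern; the file name and setting are made up for illustration.

```python
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to `path` only if not already present (idempotent)."""
    if os.path.exists(path):
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already configured: no change
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # change applied

# Usage: running twice makes at most one change, like a converging playbook.
cfg = os.path.join(tempfile.mkdtemp(), "app.conf")
print(ensure_line(cfg, "max_connections = 100"))  # first run applies the change
print(ensure_line(cfg, "max_connections = 100"))  # second run is a no-op
```

The True/False return mirrors the "changed" status these tools report, which is what makes repeated runs safe to schedule.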
Data Scientist: Pluto7 is a services and solutions company focused on building ML, AI, analytics, and IoT tailored solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We are a Google premium partner in AI & ML, which means you'll have the opportunity to work and collaborate with folks from Google. Are you an innovator with a passion for working with data and finding insights, and an inquisitive mind constantly yearning to learn new ideas? Then we are looking for you. As a Pluto7 Data Scientist, you will be one of the key members of our innovative artificial intelligence and machine learning team. You are expected to be unfazed by large volumes of data, love to apply various models, and use technology to process and filter data for analysis.

Responsibilities:
- Build and optimize machine learning models.
- Work with large/complex datasets to solve difficult and non-routine analysis problems, applying advanced analytical methods as needed.
- Build and prototype data pipelines for analysis at scale.
- Work cross-functionally with business analysts and data engineers to help develop cutting-edge and innovative artificial intelligence and machine learning models.
- Make recommendations on the selection of machine learning models.
- Drive accuracy levels of the given ML models to the next stage.
- Experience in developing visualisations.
- Good exposure to exploratory data analysis.
- Strong experience in statistics and ML algorithms.

Minimum qualifications:
- 2+ years of relevant work experience in ML and advanced data analytics (e.g., as a Machine Learning Specialist / Data Scientist).
- Strong experience using machine learning and artificial intelligence frameworks such as TensorFlow, scikit-learn, and Keras with Python.
- Good Python/R/SAS programming skills.
- Understanding of cloud platforms like GCP, AWS, or others.
Preferred qualifications:
- Work experience in building data pipelines to ingest, cleanse, and transform data.
- Applied experience with machine learning on large datasets and experience translating analysis results into business recommendations.
- Demonstrated skill in selecting the right statistical tools for a given data analysis problem.
- Demonstrated effective written and verbal communication skills.
- Demonstrated willingness to both teach others and learn new techniques.

Work location: Bangalore
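"Selecting the right statistical tool" during exploratory analysis can be as simple as a z-score screen for anomalies before any modelling. A stdlib-only sketch; the sample data and the 2.0 cutoff are invented for illustration.

```python
import statistics

def zscore_outliers(values, cutoff=2.0):
    """Return the values whose z-score magnitude exceeds `cutoff`."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return [v for v in values if abs(v - mean) / stdev > cutoff]

data = [10, 11, 9, 10, 12, 10, 11, 50]  # 50 is an obvious anomaly
print(zscore_outliers(data))  # → [50]
```

Note the z-score assumes roughly symmetric data; for skewed distributions a robust alternative (e.g. median absolute deviation) is usually the better tool choice.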
About Company:
Tracxn is a Bangalore-based product company providing a research and deal-sourcing platform for venture capital, private equity, corp devs, and professionals working around the startup ecosystem. We are a team of 750+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Sequoia Capital, Accel Partners, and NEA, and large corporates such as ING, Societe Generale, LG, and Royal Bank of Canada. We are backed by prominent investors like Ratan Tata, Nandan Nilekani, and SAIF Partners.

Founders:
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)

Roles and Responsibilities:
- Design, develop, and deliver products and frameworks (like queuing systems, schedulers, etc.) that will be used across all the engineering teams in Tracxn.
- Evaluate, benchmark, and roll out platform components like API gateways, load balancers, etc.
- Drive centralized solutions like service discovery, rate limiting, etc. for teams across Tracxn.
- Extend or develop frameworks on top of Docker to solve Tracxn's needs for scaling.
- Work with application development teams to refactor apps or build new modules to help onboard new architectures.

Skills and Experience:
- Must have experience in building fault-tolerant and scalable infrastructure.
- Must have good conceptual, architectural, and design skills.
- Must have experience in at least one programming language such as Java, C#, or Python, plus shell/Bash scripting.
- Must have experience with at least one RDBMS or NoSQL database.
- Must have a deep understanding of how software works at the systems level, with familiarity with low-level aspects of performance, multi-threading, performance analysis, and optimization.
- Good to have experience working with cloud platforms like AWS, Google Cloud, etc.
- Good to have experience with Docker, Kubernetes, Ansible, Chef, Puppet.
- Good to have experience with frameworks such as Kafka, HAProxy, Nginx.
- Team management experience will be an added bonus.

What we have to offer:
- Work with a performance-oriented team driven by ownership.
- Learn to design systems for high accuracy, efficiency, and scalability.
- Focus on delivering quality work rather than deadlines.
- Meritocracy-driven, candid culture.
- Very high visibility into the startup ecosystem.

Above all, you love to build and ship products that customers will use every day.
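The rate-limiting responsibility mentioned above is most often implemented as a token bucket: requests spend tokens, and tokens refill at a fixed rate, allowing short bursts without exceeding the average rate. A minimal sketch; the class name and rates are illustrative, not Tracxn code.

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bucket with capacity 2 admits a burst of two, then throttles.
bucket = TokenBucket(rate=10, capacity=2)
print([bucket.allow() for _ in range(3)])  # → [True, True, False]
```

A centralized version of the same idea keeps the bucket state in a shared store (e.g. Redis) so every gateway instance enforces one global limit.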
Required Skills:
- Strong experience with AWS / Google Cloud.
- Strong development experience with Perl, Python, Docker, and Postgres.
- Strong experience in build/release management.
- Working experience on Linux.
- Excellent knowledge of shell scripts.
- Knowledge of virtualization platforms such as VMware.
- Working experience with configuration management tools.
- Working experience with test and build systems such as Jenkins/Maven.
- Strong communication skills, a passion to learn, and an ability to work well with people at all levels of an organization.

Roles and Responsibilities:
- Create deployment units consisting of build, documentation, and installation artifacts.
- Prepare delivery definition / release note / production turn-over note documents.
- Establish DevOps policies.
- Communicate with developers, product managers, and technical support specialists on product issues.
- Assist in creating and maintaining the configuration and change management plan for the project.
- Choose suitable DevOps tools.
- Set up the configuration management environment.
- Assist in routine back-up and archival of the project repository.
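A deployment unit like the one described above usually ships with a checksum manifest so the release note can reference verifiable artifacts. A small sketch of generating one; the artifact name and contents are hypothetical.

```python
import hashlib
import os
import tempfile

def manifest(paths):
    """Map each artifact's basename to its SHA-256 digest for the release note."""
    result = {}
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):  # stream large files
                h.update(chunk)
        result[os.path.basename(path)] = h.hexdigest()
    return result

# Usage: checksum a (stand-in) build artifact before packaging.
build_dir = tempfile.mkdtemp()
artifact = os.path.join(build_dir, "app-1.0.0.tar.gz")
with open(artifact, "wb") as f:
    f.write(b"release contents")
print(manifest([artifact]))
```

Tools like `sha256sum` produce the same digests; recording them at build time is what lets production turn-over verify that what was deployed is what was built.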